The particle filter is a computational mechanism for propagating empirical probability densities and updating them, via resampling, to incorporate new data using Bayes' rule. In a stochastic control context, the conditional density of the state is computed in this way and then used to construct the next feedback control value. This latter calculation is in general intractable, since it requires solving the Stochastic Dynamic Programming Equation, so simplifying and approximate methods are needed. The most dramatic of these is Certainty Equivalence Control, in which a single “best” value of the state is selected and used without regard to the rest of the density, such as its variance; often this value is chosen to be the conditional mean. Other choices will be presented and described. The complexity of moving from conditional state density to stochastic optimal control also limits the applicability of popular modern methods such as Model Predictive Control (MPC), because these rely on open-loop constrained optimization. The core issue is that stochastic optimal control is necessarily closed-loop and density-based, while MPC is an open-loop method in hiding. In the presentation, a middle path is developed which blends open- and closed-loop approaches while maintaining computational tractability. Open- and closed-loop simulations form part of the procedure for selecting a best state value, with the particle filter operating alongside to provide the correct conditional density over which to search.
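As a concrete illustration of the predict/update/resample cycle and of certainty-equivalence feedback, the following is a minimal sketch of a bootstrap particle filter. The scalar linear-Gaussian model, the feedback gain, and all numerical values are illustrative assumptions introduced here, not taken from the text above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar model (an assumption for illustration only):
#   state:       x_{k+1} = 0.9 * x_k + u_k + w_k,   w_k ~ N(0, q)
#   measurement: y_k     = x_k + v_k,                v_k ~ N(0, r)
q, r = 0.1, 0.5

def particle_filter_step(particles, u, y):
    """One predict/update/resample cycle of a bootstrap particle filter."""
    n = particles.size
    # Predict: propagate the empirical density through the state dynamics.
    particles = 0.9 * particles + u + rng.normal(0.0, np.sqrt(q), n)
    # Update: weight each particle by the measurement likelihood (Bayes' rule).
    weights = np.exp(-0.5 * (y - particles) ** 2 / r)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights, turning the
    # weighted cloud back into an equally weighted empirical density.
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx]

def ce_control(particles, gain=-0.5):
    # Certainty Equivalence Control: collapse the conditional density to its
    # mean, discarding variance and shape, and apply a (hypothetical) gain.
    return gain * particles.mean()

# Usage: filter a short synthetic measurement record.
particles = rng.normal(0.0, 1.0, 1000)   # initial empirical density
u = 0.0
for y in [0.3, 0.1, -0.2, 0.4]:
    particles = particle_filter_step(particles, u, y)
    u = ce_control(particles)            # next feedback control value
    print(f"state estimate {particles.mean():+.3f}, control {u:+.3f}")
```

Multinomial resampling is used here purely for brevity; practical implementations often prefer systematic or stratified schemes to reduce resampling variance.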