IEEE Transactions on Automatic Control, Vol. 62, No. 1, pp. 262–276, 2017
An Iterative Method for Nonlinear Stochastic Optimal Control Based on Path Integrals
This paper proposes a new iterative solution method for nonlinear stochastic optimal control problems based on path integral analysis. First, we provide an iteration law for solving the stochastic Hamilton-Jacobi-Bellman (SHJB) equation associated with this problem, which is a second-order nonlinear partial differential equation (PDE). Each iteration of the proposed method reduces to a Cauchy problem for a linear parabolic PDE, whose explicit solution is given by the Feynman-Kac formula. Second, we derive a suboptimal feedback controller at each iteration using path integral analysis. Third, the convergence properties of the proposed method are investigated; in particular, conditions are provided under which the sequence of solutions generated by the proposed iteration converges and its limit satisfies the SHJB equation. Finally, numerical simulations demonstrate the effectiveness of the proposed method.
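For reference, the Feynman-Kac representation invoked in the abstract can be stated as follows. Since the abstract does not fix the paper's notation, the scalar form of the standard Feynman-Kac formula is shown with generic drift $b$, diffusion $\sigma$, potential $V$, source $f$, and terminal data $\psi$; these symbols are placeholders and need not coincide with the coefficients arising in the paper's iteration.

% Linear parabolic Cauchy problem (generic coefficients, terminal-value form):
\[
  \partial_t u + b(x,t)\,\partial_x u + \tfrac{1}{2}\sigma^2(x,t)\,\partial_{xx} u
  - V(x,t)\,u + f(x,t) = 0,
  \qquad u(x,T) = \psi(x).
\]
% Its solution admits the probabilistic (path integral) representation
\[
  u(x,t) = \mathbb{E}\!\left[
    \int_t^T e^{-\int_t^s V(X_r,r)\,dr}\, f(X_s,s)\,ds
    + e^{-\int_t^T V(X_r,r)\,dr}\,\psi(X_T)
    \;\middle|\; X_t = x
  \right],
\]
% where the expectation is over trajectories of the diffusion
\[
  dX_s = b(X_s,s)\,ds + \sigma(X_s,s)\,dW_s .
\]

In practice, such expectations can be approximated by Monte Carlo sampling of the diffusion trajectories, which is what makes the per-iteration linear problem amenable to path-integral-type computation.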