Industrial & Engineering Chemistry Research, Vol. 53, No. 19, 8106-8119, 2014
Data-based Suboptimal Neuro-control Design with Reinforcement Learning for Dissipative Spatially Distributed Processes
For many complex real-world industrial processes, an accurate system model is often unavailable. In this paper, we consider partially unknown spatially distributed processes (SDPs) described by general highly dissipative nonlinear partial differential equations (PDEs) and develop a data-based adaptive suboptimal neuro-control method by introducing the idea of reinforcement learning (RL). First, based on the empirical eigenfunctions computed with Karhunen-Loève decomposition, singular perturbation theory is used to derive a reduced-order ordinary differential equation (ODE) model that captures the dominant dynamics of the SDP. Second, the Hamilton-Jacobi-Bellman (HJB) approach is used for suboptimal control design; the idea of policy iteration (PI) is introduced to learn the solution of the HJB equation online, and the convergence of the PI procedure is established. Third, a neural network (NN) is employed to approximate the cost function in the PI procedure, and an NN weight-tuning algorithm based on the gradient descent method is proposed. We prove that the developed online adaptive suboptimal neuro-controller guarantees that the original closed-loop PDE system is semiglobally uniformly ultimately bounded. Finally, the developed data-based control method is applied to a nonlinear diffusion-reaction process, and the results demonstrate its effectiveness.
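As a rough illustration of the first step only (the paper itself provides no code), the sketch below extracts empirical eigenfunctions from snapshot data using the SVD form of Karhunen-Loève decomposition; the spatial grid, snapshot layout, energy threshold, and toy diffusion-reaction snapshots are assumptions made purely for illustration, not the paper's actual setup.

```python
# Minimal sketch: Karhunen-Loeve (POD) mode extraction from snapshot data via SVD.
# Assumed layout: snapshots stored column-wise, i.e. one spatial profile per column.
import numpy as np

def kl_modes(snapshots, energy=0.99):
    """Return the dominant empirical eigenfunctions of a snapshot matrix.

    snapshots : (n_space, n_time) array, each column a spatial profile u(., t_j).
    energy    : fraction of snapshot energy the retained modes must capture.
    """
    # Subtract the temporal mean so the modes describe fluctuations about it.
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean
    # Thin SVD: the left singular vectors are the empirical eigenfunctions.
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    # Keep the smallest number of modes whose cumulative energy reaches the target.
    cum = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(cum, energy)) + 1
    return U[:, :k], s[:k], mean

# Toy example: snapshots of a 1-D diffusion-reaction-like state u(z, t) on a grid
# (hypothetical data, used only to show the expected array shapes).
z = np.linspace(0.0, np.pi, 100)
t = np.linspace(0.0, 2.0, 200)
U_snap = (np.sin(z)[:, None] * np.exp(-t)[None, :]
          + 0.1 * np.sin(2 * z)[:, None] * np.exp(-4 * t)[None, :])
modes, sing_vals, mean = kl_modes(U_snap)
print(modes.shape)  # (100, k): k dominant empirical eigenfunctions on the grid
```

A Galerkin projection of the PDE onto these modes would then yield the reduced-order ODE model on which the HJB-based policy iteration and NN cost approximation described in the abstract operate.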