Automatica, Vol. 42, No. 4, pp. 523-533, 2006
Optimization over state feedback policies for robust control with constraints
This paper is concerned with the optimal control of linear discrete-time systems subject to unknown but bounded state disturbances and mixed polytopic constraints on the state and input. It is shown that the class of admissible affine state feedback control policies with knowledge of prior states is equivalent to the class of admissible feedback policies that are affine functions of the past disturbance sequence. This implies that a broad class of constrained finite horizon robust and optimal control problems, where the optimization is over affine state feedback policies, can be solved in a computationally efficient fashion using convex optimization methods. This equivalence result is used to design a robust receding horizon control (RHC) state feedback policy such that the closed-loop system is input-to-state stable (ISS) and the constraints are satisfied for all time and all allowable disturbance sequences. The cost to be minimized in the associated finite horizon optimal control problem is quadratic in the disturbance-free state and input sequences. The value of the receding horizon control law can be calculated at each sample instant using a single, tractable and convex quadratic program (QP) if the disturbance set is polytopic, or a tractable second-order cone program (SOCP) if the disturbance set is given by a 2-norm bound. (c) 2006 Elsevier Ltd. All rights reserved.
Keywords: robust control; constraint satisfaction; robust optimization; predictive control; optimal control
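The following is a minimal sketch, not the authors' code, of the idea described in the abstract: optimizing over policies that are affine in the past disturbance sequence, u_i = v_i + sum_{j<i} M_{ij} w_j, with a quadratic cost in the disturbance-free state and input sequences and constraints enforced robustly. The dynamics, horizon, bounds, and the vertex-enumeration treatment of the box disturbance set are illustrative assumptions, not taken from the paper.

```python
# Sketch of finite-horizon optimization over affine disturbance-feedback
# policies for a constrained linear system with a box-bounded disturbance.
# All numerical data (A, B, N, bounds) are assumed for illustration only.
import itertools
import numpy as np
import cvxpy as cp

# Assumed double-integrator dynamics x_{k+1} = A x_k + B u_k + w_k
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
n, m = 2, 1
N = 4                       # horizon
w_max = 0.1                 # disturbance set W = {w : ||w||_inf <= w_max}
x0 = np.array([2.0, 0.0])

# Decision variables: nominal inputs v_i and strictly causal
# disturbance-feedback gains M_ij (u_i depends only on w_0, ..., w_{i-1}).
v = [cp.Variable(m) for _ in range(N)]
M = [[cp.Variable((m, n)) for _ in range(i)] for i in range(N)]

# Quadratic cost in the disturbance-free (nominal) state and input sequences.
Q, R = np.eye(n), np.eye(m)
x_nom = [x0]
for i in range(N):
    x_nom.append(A @ x_nom[i] + B @ v[i])
cost = sum(cp.quad_form(x_nom[i + 1], Q) + cp.quad_form(v[i], R)
           for i in range(N))

# State and input are affine in w, and the constraint sets are polytopic,
# so robust satisfaction can be enforced at the vertices of W^N.
constraints = []
for vert in itertools.product(*[[-w_max, w_max]] * (n * N)):
    w = [np.array(vert[n * j:n * (j + 1)]) for j in range(N)]
    x = x0
    for i in range(N):
        u = v[i] + sum(M[i][j] @ w[j] for j in range(i))
        x_next = A @ x + B @ u + w[i]
        constraints += [cp.abs(u) <= 1.0,       # assumed input bound
                        cp.abs(x_next) <= 5.0]  # assumed state bound
        x = x_next

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("nominal input sequence:", [vi.value for vi in v])
```

Because the policy is affine in the disturbances, the problem above is a convex QP, consistent with the abstract's claim of tractability for polytopic disturbance sets; in practice one would avoid explicit vertex enumeration by dualizing the robust constraints, as done in the paper.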