SIAM Journal on Control and Optimization, Vol.48, No.7, 4181-4223, 2010
UNIFORM RECURRENCE PROPERTIES OF CONTROLLED DIFFUSIONS AND APPLICATIONS TO OPTIMAL CONTROL
In this paper we address an open problem stated in [A. Arapostathis et al., SIAM J. Control Optim., 31 (1993), pp. 282-344] in the context of discrete-time controlled Markov chains with a compact action space. The question is whether the associated invariant probability distributions are necessarily tight if all stationary Markov policies are stable, in other words, if the corresponding chains are positive recurrent. We answer this question affirmatively for controlled nondegenerate diffusions modeled by Itô stochastic differential equations. We apply the results to the ergodic control problem in its average formulation to obtain fairly general characterizations of optimality without resorting to blanket Lyapunov stability assumptions.
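For context, the class of models discussed can be written in a standard form (a sketch of the usual formulation of controlled diffusions and ergodic average cost; the symbols $b$, $\sigma$, $c$, and $U_t$ and the precise assumptions are conventional choices, not quoted from the paper):

```latex
% Controlled nondegenerate diffusion on \mathbb{R}^d driven by an
% It\^o stochastic differential equation, with control process U_t:
dX_t \;=\; b(X_t, U_t)\, dt \;+\; \sigma(X_t)\, dW_t ,
% where W is a standard Wiener process and \sigma\sigma^{\mathsf T}
% is assumed uniformly positive definite (nondegeneracy).

% Ergodic (long-run average) cost of a policy U for a running cost c:
J(U) \;=\; \limsup_{T\to\infty} \frac{1}{T}\,
  \mathbb{E}\!\left[ \int_0^T c(X_t, U_t)\, dt \right].
```

Under a stable stationary Markov policy the process is positive recurrent and admits an invariant probability measure; the tightness question above concerns the family of all such invariant measures as the stationary policy varies.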