IEEE Transactions on Automatic Control, Vol. 42, No. 12, pp. 1663-1680, 1997
The Policy Iteration Algorithm for Average Reward Markov Decision Processes with General State Space
The average cost optimal control problem is addressed for Markov decision processes with unbounded cost. It is found that the policy iteration algorithm generates a sequence of policies which are c-regular (a strong stability condition), where c is the cost function under consideration. This result requires only the existence of an initial c-regular policy and an irreducibility condition on the state space. Furthermore, under these conditions the sequence of relative value functions generated by the algorithm is bounded from below and "nearly" decreasing, from which it follows that the algorithm is always convergent. Under further conditions, it is shown that the algorithm does compute a solution to the optimality equations and hence an optimal average cost policy. These results provide elementary criteria for the existence of optimal policies for Markov decision processes with unbounded cost, and they recover known results for the standard linear-quadratic-Gaussian problem. When specialized to particular applications, these results reveal new structure for optimal policies. In particular, in the control of multiclass queueing networks, it is found that there is a close connection between optimization of the network and optimal control of a far simpler fluid network model.
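To make the algorithm concrete, the following is a minimal sketch of average-cost policy iteration in the finite-state, finite-action case, which the paper's general-state-space theory extends. The numpy-based helpers evaluate_policy and policy_iteration, the unichain assumption on every policy, and the normalization h[0] = 0 are illustrative choices for this sketch, not the paper's construction.

import numpy as np

def evaluate_policy(P, c, policy):
    """Solve the Poisson equation h(x) + g = c(x, policy(x)) + sum_y P(x, y) h(y)
    for the average cost g and relative value function h, normalized by h[0] = 0.
    Assumes the chain induced by the policy is unichain."""
    n = P.shape[-1]
    Ppi = P[np.arange(n), policy]          # transition matrix under the policy
    cpi = c[np.arange(n), policy]          # one-step cost under the policy
    # Unknowns are (h[0], ..., h[n-1], g); the extra row pins down h[0] = 0.
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[:n, :n] = np.eye(n) - Ppi
    A[:n, n] = 1.0
    b[:n] = cpi
    A[n, 0] = 1.0                          # normalization row: h[0] = 0
    sol = np.linalg.solve(A, b)
    return sol[n], sol[:n]                 # (average cost g, relative values h)

def policy_iteration(P, c, max_iter=100):
    """P: (n_states, n_actions, n_states) transition kernels;
    c: (n_states, n_actions) one-step costs."""
    n, m, _ = P.shape
    policy = np.zeros(n, dtype=int)
    for _ in range(max_iter):
        g, h = evaluate_policy(P, c, policy)
        # Improvement step: minimize c(x, a) + sum_y P_a(x, y) h(y) over actions a.
        q = c + P @ h                      # shape (n_states, n_actions)
        new_policy = q.argmin(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, g, h            # policy is stable: stop
        policy = new_policy
    return policy, g, h

# Hypothetical two-state, two-action example: action 1 in state 1 is cheap
# but action choices also shift the stationary distribution.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.1, 0.9]]])
c = np.array([[1.0, 2.0],
              [3.0, 0.5]])
policy, g, h = policy_iteration(P, c)
print(policy, g, h)

The evaluation step above solves the Poisson equation directly as an (n+1)-dimensional linear system; the extra normalization row is one standard way to remove the constant offset in h, which is only determined up to an additive constant.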