IEEE Transactions on Automatic Control, Vol. 48, No. 5, pp. 758-769, 2003
Semi-Markov decision problems and performance sensitivity analysis
Recent research indicates that Markov decision processes (MDPs) can be viewed from a sensitivity point of view, and that perturbation analysis (PA), MDPs, and reinforcement learning (RL) are three closely related areas in the optimization of discrete-event dynamic systems that can be modeled as Markov processes. The goal of this paper is two-fold. First, we develop PA theory for semi-Markov processes (SMPs); second, we extend the aforementioned results on the relations among PA, MDPs, and RL to SMPs. In particular, we show that performance sensitivity formulas and policy iteration algorithms for semi-Markov decision processes (SMDPs) can be derived from performance potentials and realization matrices. Both the long-run average-cost and discounted-cost problems are considered; this approach provides a unified framework for both problems, with the long-run average-cost problem corresponding to the discount factor being zero. The results indicate that performance sensitivities and optimization depend only on first-order statistics. Implementations based on a single sample path are discussed.
Keywords: discounted Poisson equations; discrete-event dynamic systems (DEDS); Lyapunov equations; Markov decision processes (MDPs); perturbation analysis (PA); perturbation realization; Poisson equations; policy iteration; potentials; reinforcement learning (RL)
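To make the potential-based policy iteration idea concrete, the following is a minimal numerical sketch for a finite average-cost SMDP. It assumes the model is given by embedded-chain transition matrices P[a], expected per-visit costs f[a], and mean sojourn times tau[a] for each action a; the function name, data layout, and solver choices are hypothetical illustrations, not the paper's implementation, which also covers sample-path-based estimation of the potentials.

```python
import numpy as np

def smdp_policy_iteration(P, f, tau, max_iter=100):
    """Policy iteration for an average-cost SMDP via performance potentials.

    P[a]   : (S, S) embedded-chain transition matrix under action a
    f[a]   : (S,)   expected cost accrued per visit to state i under a
    tau[a] : (S,)   expected sojourn time in state i under a
    (All names and the data layout are assumptions for illustration.)
    """
    A, S = len(P), P[0].shape[0]
    policy = np.zeros(S, dtype=int)
    for _ in range(max_iter):
        # Assemble the matrices induced by the current policy.
        Pd = np.array([P[policy[i]][i] for i in range(S)])
        fd = np.array([f[policy[i]][i] for i in range(S)])
        td = np.array([tau[policy[i]][i] for i in range(S)])
        # Stationary distribution of the embedded chain: pi = pi @ Pd.
        w, v = np.linalg.eig(Pd.T)
        pi = np.real(v[:, np.argmax(np.real(w))])
        pi /= pi.sum()
        eta = (pi @ fd) / (pi @ td)  # long-run average cost per unit time
        # Poisson equation (I - Pd) g = fd - eta * td, normalized by pi @ g = 0.
        g = np.linalg.lstsq(
            np.vstack([np.eye(S) - Pd, pi]),
            np.append(fd - eta * td, 0.0), rcond=None)[0]
        # Improvement step: greedy action with respect to the potentials g.
        new_policy = np.array([
            np.argmin([f[a][i] - eta * tau[a][i] + P[a][i] @ g
                       for a in range(A)])
            for i in range(S)])
        if np.array_equal(new_policy, policy):
            break  # policy is stable, hence optimal for this model
        policy = new_policy
    return policy, eta, g
```

Note that only first-order statistics of the model (transition probabilities, mean costs, and mean sojourn times) enter the computation, consistent with the abstract's claim; the sojourn-time distributions themselves are never needed.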