IEEE Transactions on Automatic Control, Vol. 56, No. 2, pp. 333-344, 2011
On State Aggregation to Approximate Complex Value Functions in Large-Scale Markov Decision Processes
Many small electronic devices, such as cell phones and wireless sensors, have limited memory, computing power, and battery life. The pervasive use of these devices in industry, the military, and daily life calls for simple policies that are easy to implement and can be executed in a reasonably short time. The Markov decision process (MDP) provides a general framework for such policy optimization problems with complexity constraints. In many cases, we can first use a powerful computer to find the optimal (or a good) policy and its value function, and then approximate that value function by a simple one. The approximation usually relies on heuristics or experience, because the relationship between the complexity of a value function and the approximation error is not clear in general. In this paper, we assume that the optimal value function is known (or that a reasonably good estimate is available) and consider how to approximate such a complex value function. Because state aggregation is widely applied in large-scale MDPs, we focus on piecewise constant approximate value functions and use the number of aggregated states to measure the complexity of a value function. We quantify how the complexity of a value function affects the approximation error. When the optimal value function is known exactly, we develop an algorithm that finds the best simple state aggregation in polynomial time. When only estimates of the optimal value function are available, we apply ordinal optimization to find good simple state aggregations with high probability. The algorithms are demonstrated on a node activation policy optimization problem in a wireless sensor network. We hope this work sheds some light on how to find simple policies with good performance.
Keywords: descriptive complexity; discrete event dynamic systems; Markov decision processes (MDPs); ordinal optimization; state aggregation
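To make the notion of a "simple" piecewise constant approximation in the abstract concrete, the following minimal sketch groups the states of a known value function into at most K aggregated states, with every state in a group assigned one constant value. This is not the paper's polynomial-time algorithm; the function name aggregate_states and the sort-then-greedy-split scheme (with a binary search on the error tolerance) are illustrative assumptions of ours.

```python
import numpy as np

def aggregate_states(values, K):
    """Illustrative sketch: partition states into at most K groups so that a
    single constant per group approximates the given value function, keeping
    the worst-case absolute error small."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]

    def groups_needed(eps):
        # Greedy scan over sorted values: open a new group whenever the
        # current group's value range would exceed 2*eps.
        count, start = 1, v[0]
        for x in v[1:]:
            if x - start > 2 * eps:
                count, start = count + 1, x
        return count

    # Binary search for a small error tolerance eps that needs <= K groups.
    lo, hi = 0.0, (v[-1] - v[0]) / 2
    for _ in range(50):
        mid = (lo + hi) / 2
        if groups_needed(mid) <= K:
            hi = mid
        else:
            lo = mid
    eps = hi

    # Build the state-to-group map and one representative value per group.
    labels = np.empty(len(v), dtype=int)
    reps, start, g = [], v[0], 0
    for i, x in enumerate(v):
        if x - start > 2 * eps:
            reps.append(start + eps)
            g, start = g + 1, x
        labels[i] = g
    reps.append(start + eps)

    state_to_group = np.empty(len(v), dtype=int)
    state_to_group[order] = labels
    return state_to_group, np.array(reps)

# Example: approximate a 1000-state value function with 10 aggregated states.
V = np.random.rand(1000) * 100.0
groups, reps = aggregate_states(V, K=10)
print("worst-case approximation error:", np.abs(V - reps[groups]).max())
```

Sorting plus greedy splitting suffices for this sketch because a piecewise constant approximation only depends on how close states' values are, not on the transition structure; it merely illustrates the trade-off between the number of aggregated states K and the approximation error that the paper's algorithm and its ordinal-optimization variant address.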