SIAM Journal on Control and Optimization, Vol. 34, No. 1, pp. 311-328, 1996
Approximations in Dynamic Zero-Sum Games, I
We develop a unifying approach for approximating a "limit" zero-sum game by a sequence of approximating games. We discuss both the convergence of the values and the convergence of optimal (or "almost" optimal) strategies. Moreover, based on optimal policies for the limit game, we construct policies that are almost optimal for the approximating games. We then apply the general framework to state approximations of stochastic games, to the convergence of finite-horizon problems to infinite-horizon problems, and to convergence in the discount factor and in the immediate reward.
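To make one of these approximation schemes concrete, the following is a minimal numerical sketch (not taken from the paper) of how finite-horizon values of a discounted zero-sum stochastic game converge to the infinite-horizon value: Shapley's value iteration computes the value of the n-stage game at each step, and these values converge geometrically to the discounted value. The function names, the random example data, and the use of scipy's linear-programming solver for the one-shot matrix games are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog


def matrix_game_value(A):
    """Value of the zero-sum matrix game A (row player maximizes) via an LP."""
    m, n = A.shape
    # Variables: x_1..x_m (row player's mixed strategy) and v (the game value).
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # maximize v  <=>  minimize -v
    # Constraints v <= sum_i x_i A[i, j] for every column j:  -A^T x + v <= 0.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])   # sum_i x_i = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]


def shapley_iteration(r, p, beta, n_steps):
    """Finite-horizon values v_n of a discounted zero-sum stochastic game.

    r[s] is the (m x n) stage-payoff matrix in state s, p[s] the (m x n x S)
    transition kernel, beta in (0, 1) the discount factor.  Starting from the
    zero terminal reward, v_n is the value of the n-stage game; by Shapley's
    contraction argument it converges to the infinite-horizon value.
    """
    S = len(r)
    v = np.zeros(S)
    for _ in range(n_steps):
        v = np.array([matrix_game_value(r[s] + beta * (p[s] @ v)) for s in range(S)])
    return v


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S, m, n, beta = 2, 2, 2, 0.9
    r = [rng.uniform(-1, 1, (m, n)) for _ in range(S)]
    p = []
    for _ in range(S):
        q = rng.uniform(size=(m, n, S))
        p.append(q / q.sum(axis=2, keepdims=True))   # normalize to probabilities
    for horizon in (5, 20, 80):
        print(horizon, shapley_iteration(r, p, beta, horizon))
```

Running the script prints the 5-, 20-, and 80-stage values, which stabilize as the horizon grows; the gap to the infinite-horizon value is of order beta^n, illustrating the kind of finite-to-infinite-horizon convergence the abstract refers to.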