IEEE Transactions on Automatic Control, Vol. 42, No. 5, pp. 674-690, 1997
An Analysis of Temporal-Difference Learning with Function Approximation
We discuss the temporal-difference learning algorithm, as applied to approximating the cost-to-go function of an infinite-horizon discounted Markov chain. The algorithm we analyze updates parameters of a linear function approximator online, during a single endless trajectory of an irreducible aperiodic Markov chain with a finite or infinite state space. We present a proof of convergence (with probability one), a characterization of the limit of convergence, and a bound on the resulting approximation error. Furthermore, our analysis is based on a new line of reasoning that provides new intuition about the dynamics of temporal-difference learning. In addition to proving new and stronger positive results than those previously available, we identify the significance of online updating and potential hazards associated with the use of nonlinear function approximators. First, we prove that divergence may occur when updates are not based on trajectories of the Markov chain. This fact reconciles positive and negative results that have been discussed in the literature regarding the soundness of temporal-difference learning. Second, we present an example illustrating the possibility of divergence when temporal-difference learning is used in the presence of a nonlinear function approximator.
Keywords: convergence; TD(λ)
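To make the setting concrete, the following is a minimal Python sketch of TD(λ) with a linear function approximator, updated online along a single simulated trajectory of a finite Markov chain, in the spirit of the algorithm analyzed here. The particular chain, features, per-stage costs, and step-size schedule are arbitrary illustrative choices, not taken from the paper.

```python
# Minimal sketch: TD(lambda) with a linear approximator of the discounted
# cost-to-go function, updated online along one trajectory of a finite
# Markov chain. All names (P, phi, costs, ...) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_states, n_features = 10, 4
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)          # generic random transition matrix
costs = rng.random(n_states)               # per-stage cost g(s)
phi = rng.random((n_states, n_features))   # feature vector phi(s) for each state

alpha = 0.95                   # discount factor
lam = 0.7                      # eligibility-trace parameter lambda
theta = np.zeros(n_features)   # parameters: cost-to-go estimate J(s) ~ phi(s) @ theta
z = np.zeros(n_features)       # eligibility trace

s = 0
for t in range(200_000):
    s_next = rng.choice(n_states, p=P[s])
    # temporal difference for the observed transition s -> s_next
    d = costs[s] + alpha * phi[s_next] @ theta - phi[s] @ theta
    # accumulate the discounted eligibility trace and update the parameters online
    z = alpha * lam * z + phi[s]
    step = 1.0 / (1.0 + t / 1000.0)        # diminishing step-size schedule
    theta += step * d * z
    s = s_next

print("learned parameters:", theta)
```

In this sketch the parameters are adjusted along the simulated trajectory itself, which is the online-sampling feature whose significance the analysis highlights; replacing the sampled transitions with updates taken from an arbitrary state distribution is the situation in which divergence can occur.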