IEEE Transactions on Automatic Control, Vol. 40, No. 7, pp. 1199-1209, 1995
Machine Learning and Nonparametric Bandit Theory
In its most basic form, bandit theory is concerned with the design problem of sequentially choosing members from a given collection of random variables so that the regret, i.e., $R(n) = \sum_j (\mu^* - \mu_j)\, E\, T_n(j)$, grows as slowly as possible with increasing $n$. Here $\mu_j$ is the expected value of the bandit arm (i.e., random variable) indexed by $j$, $T_n(j)$ is the number of times arm $j$ has been selected in the first $n$ decision stages, and $\mu^* = \sup_j \mu_j$. The present paper contributes to the theory by considering the situation in which observations are dependent. To begin with, the dependency is presumed to depend only on past observations of the same arm, but later we allow that it may be with respect to the entire past and that the set of arms is infinite. This brings queues and, more generally, controlled Markov processes into our purview. Thus our "black-box" methodology is suitable for the case in which the only observables are cost values and, in particular, the probability structure and loss function are unknown to the designer. The conclusion of the analysis is that, under lenient conditions, using the algorithms prescribed herein, risk growth is commensurate with that in the simplest i.i.d. cases. Our methods represent an alternative to recent stochastic-approximation/perturbation-analysis ideas for tuning queues.
Keywords: OPTIMIZATION; PARAMETER
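To make the regret criterion concrete, the following is a minimal Python sketch, not the authors' algorithm: a generic "play the empirical leader, with logarithmically sparse forced sampling" rule run on i.i.d. Bernoulli arms (the simplest setting the abstract mentions), followed by the realized analogue of $R(n) = \sum_j (\mu^* - \mu_j)\, E\, T_n(j)$ computed from the pull counts $T_n(j)$. The arm distributions, the $1 + \log t$ forcing threshold, and all names here are illustrative assumptions, not taken from the paper.

```python
import math
import random

def forced_exploration_bandit(arms, n):
    """Play the empirical leader, but force-sample any arm whose pull count
    lags a logarithmic schedule, so every mean keeps being re-estimated.
    `arms` is a list of zero-argument callables returning a reward sample.
    (Illustrative rule; the threshold 1 + log t is an assumed choice.)"""
    k = len(arms)
    counts = [0] * k           # T_n(j): number of pulls of arm j so far
    sums = [0.0] * k           # running reward totals per arm

    for t in range(1, n + 1):
        # Forced sampling: arms pulled fewer than ~log t times get priority.
        under_sampled = [j for j in range(k) if counts[j] < 1 + math.log(t)]
        if under_sampled:
            j = min(under_sampled, key=lambda i: counts[i])
        else:
            # Otherwise play the current empirical leader.
            j = max(range(k), key=lambda i: sums[i] / counts[i])
        r = arms[j]()
        counts[j] += 1
        sums[j] += r
    return counts

# Example: two Bernoulli arms with (unknown to the rule) means 0.5 and 0.6.
random.seed(0)
arms = [lambda: float(random.random() < 0.5),
        lambda: float(random.random() < 0.6)]
counts = forced_exploration_bandit(arms, 10000)

# Realized regret, using the pull counts T_n(j) in place of E T_n(j).
mu = [0.5, 0.6]
regret = sum((max(mu) - mu[j]) * counts[j] for j in range(len(mu)))
print("pull counts:", counts, " realized regret:", regret)
```

The forcing schedule guarantees each arm is sampled on the order of $\log n$ times, so a mistaken empirical leader keeps being re-tested while the regret contribution of the forced pulls stays logarithmic; this is the sense in which such schemes can keep regret growth commensurate with the simplest i.i.d. cases, as the abstract asserts for the algorithms prescribed in the paper.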