IEEE Transactions on Automatic Control, Vol.54, No.6, 1243-1253, 2009
Regret and Convergence Bounds for a Class of Continuum-Armed Bandit Problems
We consider a class of multi-armed bandit problems in which the set of available actions can be mapped to a convex, compact region of R^d, a setting often referred to as the "continuum-armed bandit" problem. The paper establishes bounds on the efficiency of any arm-selection procedure under certain conditions on the class of possible underlying mean reward functions. Both finite-time lower bounds on the growth rate of the regret and asymptotic upper bounds on the rates of convergence of the selected control values to the optimum are derived. We explicitly characterize the dependence of these convergence rates on the minimal rate of variation of the mean reward function in a neighborhood of the optimal control. The bounds can be used to demonstrate the asymptotic optimality of the Kiefer-Wolfowitz method of stochastic approximation for a large class of possible mean reward functions.
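The Kiefer-Wolfowitz procedure mentioned above estimates the gradient of the unknown mean reward function from paired noisy observations and takes projected ascent steps with decaying gains. The following is a minimal one-dimensional Python sketch of that idea, not the paper's construction: the gain schedules a_n = a/n and c_n = c/n^(1/4) are the classical choices, and the quadratic test reward with Gaussian noise is an illustrative assumption.

import numpy as np

def kiefer_wolfowitz(reward, x0, n_steps=10_000, a=1.0, c=1.0,
                     bounds=(0.0, 1.0), rng=None):
    """Kiefer-Wolfowitz stochastic approximation for maximizing a noisy
    reward over a compact interval (illustrative sketch; a_n = a/n and
    c_n = c/n**0.25 are the classical gain/perturbation schedules)."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = bounds
    x = x0
    for n in range(1, n_steps + 1):
        a_n = a / n
        c_n = c / n ** 0.25
        # Two-sided finite-difference gradient estimate from two noisy pulls.
        g = (reward(min(x + c_n, hi)) - reward(max(x - c_n, lo))) / (2.0 * c_n)
        # Ascent step, projected back onto the compact action set.
        x = float(np.clip(x + a_n * g, lo, hi))
    return x

if __name__ == "__main__":
    # Hypothetical mean reward 1 - (x - 0.6)^2 observed with Gaussian noise.
    rng = np.random.default_rng(0)
    noisy_reward = lambda x: 1.0 - (x - 0.6) ** 2 + 0.1 * rng.standard_normal()
    print(kiefer_wolfowitz(noisy_reward, x0=0.2, rng=rng))

Run as a script, this drives the sampled control toward the (here, known) optimum at x = 0.6; the regret and convergence-rate bounds in the paper quantify how quickly any such procedure can do this over the stated function classes.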