IEEE Transactions on Automatic Control, Vol.63, No.4, 1105-1112, 2018
On the Convergence Time of Dual Subgradient Methods for Strongly Convex Programs
This paper studies the convergence time of dual gradient methods for general (possibly nondifferentiable) strongly convex programs. For general convex programs, the convergence time of dual subgradient/gradient methods with simple running averages (running averages started from iteration 0) is known to be O(1/ε^2). This paper shows that the convergence time for general strongly convex programs is O(1/ε). This paper also considers a variation of the averaging scheme, called sliding running averages, and shows that if the dual function of the strongly convex program is locally quadratic, then the convergence time of the dual gradient method with sliding running averages is O(log(1/ε)). The convergence time analysis is further verified by numerical experiments.
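To illustrate the kind of algorithm the abstract refers to, the following is a minimal sketch of a dual subgradient method for a strongly convex quadratic program, comparing a simple running average of the primal iterates with a sliding average over the most recent half of the iterates. The problem data, the fixed step size alpha, and the choice of sliding window are illustrative assumptions, not the paper's exact setup or step-size rules.

```python
import numpy as np

# Hedged sketch (assumptions noted above): dual subgradient method for the
# strongly convex program
#     minimize ||x - c||^2   subject to   A x <= b,
# with a simple running average of the primal iterates and a "sliding"
# running average over the most recent half of the iterates.

rng = np.random.default_rng(0)
n, m = 5, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m) + 1.0
c = rng.standard_normal(n)

def primal_min(lam):
    # argmin_x ||x - c||^2 + lam^T (A x - b) has this closed form,
    # since setting the gradient 2(x - c) + A^T lam to zero gives x.
    return c - 0.5 * A.T @ lam

alpha = 0.05          # fixed dual step size (illustrative choice)
T = 2000
lam = np.zeros(m)
iterates = []

for t in range(T):
    x = primal_min(lam)
    iterates.append(x)
    # projected dual subgradient ascent step on the multipliers
    lam = np.maximum(0.0, lam + alpha * (A @ x - b))

X = np.array(iterates)
x_simple = X.mean(axis=0)            # simple running average (all iterates)
x_sliding = X[T // 2:].mean(axis=0)  # sliding average: most recent half

for name, xbar in [("simple", x_simple), ("sliding", x_sliding)]:
    print(name, "objective:", np.sum((xbar - c) ** 2),
          "max constraint violation:", np.max(A @ xbar - b))
```

Running the sketch reports the objective value and worst constraint violation of each averaged solution; under the paper's assumptions, the sliding average is the variant expected to benefit from the faster O(log(1/ε)) rate when the dual function is locally quadratic.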