Automatica, Vol.90, 196-203, 2018
Optimal distributed stochastic mirror descent for strongly convex optimization
In this paper we study convergence rates for constrained stochastic strongly convex optimization in a non-Euclidean setting over a time-varying multi-agent network. We propose two efficient non-Euclidean stochastic subgradient descent algorithms that use a Bregman divergence as the distance-measuring function, rather than the Euclidean distance employed by standard distributed stochastic projected subgradient algorithms. For distributed optimization of non-smooth, strongly convex functions for which only stochastic subgradients are available, the first algorithm recovers the best previously known rate of O(ln(T)/T), where T is the total number of iterations. The second algorithm is an epoch variant of the first that attains the optimal convergence rate of O(1/T), matching that of the best previously known centralized stochastic subgradient algorithm. Finally, we report simulation results illustrating the proposed algorithms. (C) 2018 Elsevier Ltd. All rights reserved.
Keywords: Distributed stochastic optimization; Strong convexity; Non-Euclidean divergence; Mirror descent; Epoch gradient descent; Optimal convergence rate
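As a minimal illustration of the mirror descent step the abstract refers to (a generic, single-agent sketch under assumed notation, not the paper's exact distributed update), let \psi be a strongly convex distance-generating function, D_\psi its Bregman divergence, g_t a stochastic subgradient at the current iterate x_t, \eta_t a step size, and \mathcal{X} the constraint set; distributed variants typically apply such a step to a consensus-weighted combination of neighbors' iterates:

\[
D_\psi(x, y) = \psi(x) - \psi(y) - \langle \nabla \psi(y),\, x - y \rangle,
\qquad
x_{t+1} = \arg\min_{x \in \mathcal{X}} \Big\{ \langle g_t, x \rangle + \tfrac{1}{\eta_t}\, D_\psi(x, x_t) \Big\}.
\]

With \psi(x) = \tfrac{1}{2}\|x\|_2^2 the Bregman divergence reduces to the squared Euclidean distance and the update becomes ordinary projected stochastic subgradient descent; other choices of \psi (e.g., negative entropy on the simplex) give the non-Euclidean updates used here.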