IEEE Transactions on Automatic Control, Vol. 58, No. 8, pp. 2001-2012, 2013
Random Coordinate Descent Algorithms for Multi-Agent Convex Optimization Over Networks
In this paper, we develop randomized block-coordinate descent methods for minimizing multi-agent convex optimization problems with linearly coupled constraints over networks and prove that in expectation they obtain an ε-accurate solution in at most O(1/(λ₂(Q)ε)) iterations, where λ₂(Q) is the second smallest eigenvalue of a matrix Q defined in terms of the probabilities and the number of blocks. Moreover, the computational cost per iteration of our methods is much lower than that of methods based on full gradient information, and each iteration can be computed in a completely distributed way. We focus on how to choose the probabilities so that these randomized algorithms converge as fast as possible, which leads to solving a sparse SDP. We also analyze the rate of convergence in probability. For strongly convex functions our distributed algorithms converge linearly. We further extend the main algorithm to a more general random coordinate descent method and to problems with more general linearly coupled constraints. Preliminary numerical tests confirm that on very large optimization problems our method is much more efficient than methods based on full gradient information.
Keywords: Distributed control; linearly coupled constraints; random coordinate descent; rate of convergence; resource allocation problems; smooth convex optimization
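
To illustrate the kind of iteration the abstract describes, the following is a minimal sketch (not the paper's exact method) of a random pairwise coordinate descent step for min Σ_i f_i(x_i) subject to the coupling constraint Σ_i x_i = b. It assumes scalar agent variables, uniform pair sampling, and a stepsize based on known Lipschitz constants L_i of the gradients; the function names and the toy quadratic example are illustrative. Updating only a randomly chosen pair (i, j) along the direction e_i − e_j keeps the coupled constraint invariant, which is what allows each iteration to be computed in a distributed way between the two agents involved.

import numpy as np

def random_pair_step(x, grads, L, rng):
    """One iteration: pick a pair (i, j) and move along e_i - e_j,
    which leaves the coupling constraint sum_i x_i = b unchanged."""
    n = len(x)
    i, j = rng.choice(n, size=2, replace=False)
    g = grads[i](x[i]) - grads[j](x[j])   # directional derivative along e_i - e_j
    step = g / (L[i] + L[j])              # stepsize from the pair's Lipschitz constants
    x_new = x.copy()
    x_new[i] -= step
    x_new[j] += step                      # x_i + x_j (hence the full sum) is preserved
    return x_new

# Toy usage: quadratic costs f_i(x_i) = 0.5 * a_i * (x_i - c_i)^2 with sum(x) = b.
rng = np.random.default_rng(0)
n, b = 5, 1.0
a = rng.uniform(1.0, 3.0, n)
c = rng.standard_normal(n)
grads = [lambda xi, ai=ai, ci=ci: ai * (xi - ci) for ai, ci in zip(a, c)]
L = a                                     # Lipschitz constants of the scalar gradients
x = np.full(n, b / n)                     # feasible starting point
for _ in range(2000):
    x = random_pair_step(x, grads, L, rng)
print(x.sum())                            # remains equal to b throughout

In this sketch the pair probabilities are uniform; the abstract's point is that they can instead be optimized (via a sparse SDP) to maximize λ₂(Q) and hence the convergence rate.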