IEEE Transactions on Automatic Control, Vol. 61, No. 4, pp. 994-1009, 2016
Newton-Raphson Consensus for Distributed Convex Optimization
We address the problem of distributed unconstrained convex optimization under separability assumptions, i.e., the framework where each agent of a network is endowed with a local private multidimensional convex cost, is subject to communication constraints, and wants to collaborate to compute the minimizer of the sum of the local costs. We propose a design methodology that combines average consensus algorithms and separation-of-time-scales ideas. This strategy is proved, under suitable hypotheses, to be globally convergent to the true minimizer. Intuitively, the procedure lets the agents distributedly compute and sequentially update an approximated Newton-Raphson direction by means of suitable average consensus ratios. We show with numerical simulations that the speed of convergence of this strategy is comparable with that of alternative optimization strategies such as the Alternating Direction Method of Multipliers. Finally, we propose some alternative strategies which trade off communication and computational requirements against convergence speed.
Keywords: Consensus; distributed optimization; multi-agent systems; Newton-Raphson methods; smooth functions; unconstrained convex optimization
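As a rough illustration of the mechanism described in the abstract (agents tracking network averages by consensus and taking small steps along the resulting Newton-Raphson-like ratio), the following is a minimal Python sketch for scalar local costs. The synchronous update, the doubly stochastic matrix P, the step size eps, and the names f_grad, f_hess are assumptions made for the example, not the paper's exact formulation.

```python
# Minimal sketch of a Newton-Raphson consensus iteration for scalar,
# strictly convex local costs. Assumptions (not from the paper): a
# synchronous consensus step with a doubly stochastic matrix P, a small
# constant step size eps, and local gradient/Hessian oracles per agent.
import numpy as np

def newton_raphson_consensus(f_grad, f_hess, P, x0, eps=0.05, iters=500):
    """f_grad[i], f_hess[i]: agent i's local gradient and Hessian (callables).
    P: doubly stochastic consensus matrix matching the communication graph.
    eps: small step size (slow time scale of the local state updates)."""
    n = len(x0)
    x = np.array(x0, dtype=float)
    # Local linearization data: g_i = H_i(x_i) x_i - grad f_i(x_i), h_i = H_i(x_i).
    g = np.array([f_hess[i](x[i]) * x[i] - f_grad[i](x[i]) for i in range(n)])
    h = np.array([f_hess[i](x[i]) for i in range(n)])
    y, z = g.copy(), h.copy()          # consensus variables tracking the averages
    g_old, h_old = g.copy(), h.copy()
    for _ in range(iters):
        g = np.array([f_hess[i](x[i]) * x[i] - f_grad[i](x[i]) for i in range(n)])
        h = np.array([f_hess[i](x[i]) for i in range(n)])
        # Fast time scale: one dynamic-average-consensus step, so that
        # y_i and z_i track the network averages of the g_i and h_i.
        y = P @ (y + g - g_old)
        z = P @ (z + h - h_old)
        g_old, h_old = g, h
        # Slow time scale: each agent moves toward the consensus ratio
        # y_i / z_i, which approximates the centralized Newton-Raphson update.
        x = (1 - eps) * x + eps * (y / z)
    return x

if __name__ == "__main__":
    # Toy example: quadratic costs f_i(x) = a_i (x - b_i)^2 / 2 over a ring
    # of 4 agents; the global minimizer is the a-weighted average of the b_i.
    a = [1.0, 2.0, 0.5, 3.0]
    b = [0.0, 1.0, 4.0, -2.0]
    f_grad = [lambda x, a=a, b=b: a * (x - b) for a, b in zip(a, b)]
    f_hess = [lambda x, a=a: a for a in a]
    P = np.array([[0.5, 0.25, 0.0, 0.25],
                  [0.25, 0.5, 0.25, 0.0],
                  [0.0, 0.25, 0.5, 0.25],
                  [0.25, 0.0, 0.25, 0.5]])
    x_star = sum(ai * bi for ai, bi in zip(a, b)) / sum(a)
    print(newton_raphson_consensus(f_grad, f_hess, P, [0.0] * 4), "vs", x_star)
```

In this sketch all agents asymptotically agree on the same estimate, and the two nested loops of the paper's description appear as the consensus step (fast time scale) and the eps-damped local update (slow time scale).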