IEEE Transactions on Automatic Control, Vol.62, No.8, 3744-3757, 2017
Non-Convex Distributed Optimization
We study distributed non-convex optimization on a time-varying multi-agent network. Each node has access to its own smooth local cost function, and the collective goal is to minimize the sum of these functions. The perturbed push-sum algorithm was previously used for convex distributed optimization; we generalize that result to the case of non-convex functions. Under additional technical assumptions on the gradients, we prove convergence of the distributed push-sum algorithm to a critical point of the objective function. By adding perturbations to the update process, we show almost sure convergence of the perturbed dynamics to a local minimum of the global objective function, provided the objective function has no saddle points. Our analysis shows that this perturbed procedure converges at a rate of O(1/t).
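To make the update structure concrete, below is a minimal numerical sketch of a perturbed push-sum gradient iteration in Python. It follows the standard push-sum pattern (numerator states, weights, and their ratio as the de-biased estimate) with a zero-mean perturbation added to the gradient step and a diminishing 1/t step size. The quartic local costs, the random column-stochastic graph sequence, and all variable names are illustrative assumptions, not the paper's exact setup or experiments.

```python
import numpy as np

# Sketch of perturbed push-sum for distributed non-convex optimization.
# Assumed setup (not from the paper): 4 agents, 2-dimensional decision
# variable, smooth non-convex local costs f_i(z) = (||z - c_i||^2 - 1)^2.

rng = np.random.default_rng(0)
n_agents, dim, T = 4, 2, 5000

centers = rng.normal(size=(n_agents, dim))

def grad_f(i, z):
    # Gradient of the illustrative local cost f_i(z) = (||z - c_i||^2 - 1)^2.
    d = z - centers[i]
    return 4.0 * (d @ d - 1.0) * d

x = rng.normal(size=(n_agents, dim))   # push-sum numerator states
y = np.ones(n_agents)                  # push-sum weights
z = x / y[:, None]                     # ratio estimates used for gradients

for t in range(1, T + 1):
    # Time-varying communication: a random column-stochastic mixing matrix
    # (columns sum to 1), standing in for the time-varying graph sequence.
    A = rng.random((n_agents, n_agents)) * (rng.random((n_agents, n_agents)) < 0.5)
    np.fill_diagonal(A, 1.0)
    A /= A.sum(axis=0, keepdims=True)

    w = A @ x                          # mix numerator states over in-neighbors
    y = A @ y                          # mix push-sum weights
    z = w / y[:, None]                 # de-biased local estimates

    alpha = 1.0 / t                    # diminishing step size
    for i in range(n_agents):
        noise = rng.normal(size=dim)   # perturbation intended to avoid saddle points
        x[i] = w[i] - alpha * (grad_f(i, z[i]) + noise)

print("agent estimates:\n", z)         # agents should agree near a critical point
```

In this sketch the perturbation is scaled by the same diminishing step size as the gradient, so the noise vanishes asymptotically while still injecting enough randomness early on; this mirrors the general stochastic-approximation style of perturbed updates rather than any specific schedule from the paper.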