IEEE Transactions on Automatic Control, Vol. 64, No. 4, pp. 1426-1439, 2019
Learning Generalized Nash Equilibria in a Class of Convex Games
We consider multiagent decision making where each agent optimizes its convex cost function subject to individual and coupling constraints. The constraint sets are compact convex subsets of a Euclidean space. To learn generalized Nash equilibria, we propose a novel distributed payoff-based algorithm in which each agent uses information only about its own cost value and the value of the coupling constraint together with the associated dual multiplier. We prove convergence of this algorithm to a generalized Nash equilibrium under the assumption that the game admits a strictly convex potential function. In the absence of coupling constraints, we prove convergence to Nash equilibria under significantly weaker assumptions that do not require a potential function: strict monotonicity of the game mapping is sufficient for convergence. We also derive the convergence rate of the algorithm for strongly monotone game mappings.
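The abstract does not give the update rules themselves, so the following is only a minimal illustrative sketch of what a payoff-based primal-dual learning scheme of this kind can look like: two players with scalar actions in [0, 1], assumed quadratic costs, an assumed coupling constraint x1 + x2 <= 1, a one-point (payoff-only) gradient estimate, and assumed step-size and exploration schedules. None of these choices are taken from the paper.

```python
import numpy as np

# Hypothetical example: payoff-based (one-point) primal-dual updates for a
# 2-player convex game with one coupling constraint. All problem data and
# step sizes below are illustrative assumptions, not the authors' algorithm.

rng = np.random.default_rng(0)

def cost(i, x):
    # Assumed costs: J_i(x) = (x_i - 1)^2 + x_1 * x_2
    return (x[i] - 1.0) ** 2 + x[0] * x[1]

def coupling(x):
    # Assumed shared constraint: g(x) = x_1 + x_2 - 1 <= 0
    return x[0] + x[1] - 1.0

mu = np.array([0.5, 0.5])   # agents' action means, kept in the box [0, 1]
lam = 0.0                   # dual multiplier for the coupling constraint

for t in range(1, 100001):
    alpha = 0.5 / t ** 0.7   # primal step size (assumed schedule)
    beta = 0.5 / t ** 0.7    # dual step size (assumed schedule)
    sigma = 1.0 / t ** 0.25  # vanishing exploration level (assumed schedule)

    xi = rng.standard_normal(2)
    x = np.clip(mu + sigma * xi, 0.0, 1.0)   # perturbed actions actually played

    g = coupling(x)
    for i in range(2):
        # Each agent observes only its own cost value, the constraint value,
        # and the multiplier; it forms a one-point estimate of the gradient of
        # its Lagrangian with respect to its own action.
        payoff = cost(i, x) + lam * g
        grad_est = payoff * xi[i] / sigma
        mu[i] = np.clip(mu[i] - alpha * grad_est, 0.0, 1.0)

    # Projected dual ascent on the coupling constraint.
    lam = max(0.0, lam + beta * g)

print("action means:", mu, "multiplier:", lam)
```

In this sketch the one-point estimator stands in for any payoff-based gradient surrogate; the essential feature matching the abstract is that each agent acts on its own cost value and the coupling-constraint value with its dual multiplier, without knowing the other agents' costs or actions.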