[2602.12386] Provably Convergent Actor-Critic in Risk-averse MARL
Summary
This paper presents a novel Actor-Critic algorithm for risk-averse Multi-Agent Reinforcement Learning (MARL), proving global convergence with finite-sample guarantees in infinite-horizon general-sum Markov games.
Why It Matters
The study addresses the challenge of learning stationary policies in infinite-horizon general-sum Markov games, a fundamental open problem in MARL. By incorporating risk aversion and bounded rationality, it sidesteps the computational intractability of classic equilibria and enhances the practical applicability of reinforcement learning in multi-agent settings, making it relevant for researchers and practitioners in AI and game theory.
Key Takeaways
- Introduces a two-timescale Actor-Critic algorithm for risk-averse MARL.
- Proves global convergence with finite-sample guarantees.
- Empirically shows superior convergence properties compared to risk-neutral methods.
- Utilizes Risk-averse Quantal response Equilibria (RQE), a solution concept from behavioral game theory whose regularity makes it uniquely amenable to learning.
- Addresses a fundamental challenge in computing stationary strategies in multi-agent systems.
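The core algorithmic idea in the takeaways, a two-timescale scheme with a fast-timescale actor and a slow-timescale critic, can be illustrated with a minimal sketch. The toy MDP, step-size exponents, and softmax parameterization below are illustrative assumptions, not the paper's actual construction:

```python
import numpy as np

# Hypothetical two-timescale actor-critic sketch: the actor's step size
# decays more slowly (fast timescale) than the critic's (slow timescale),
# mirroring the paper's fast-actor/slow-critic structure. The 2-state MDP,
# reward, and exponents here are invented for illustration.

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 2, 2, 0.9

def env_step(s, a):
    # Toy MDP: action 0 keeps the state, action 1 flips it; reward 1 in state 1.
    s_next = s if a == 0 else 1 - s
    return s_next, float(s_next == 1)

theta = np.zeros((n_states, n_actions))  # actor: softmax policy logits
V = np.zeros(n_states)                   # critic: state-value estimates

s = 0
for t in range(1, 20001):
    alpha_actor = 1.0 / t ** 0.6   # fast timescale: larger, slowly decaying steps
    alpha_critic = 1.0 / t ** 0.9  # slow timescale: smaller steps
    logits = theta[s] - theta[s].max()
    probs = np.exp(logits) / np.exp(logits).sum()
    a = rng.choice(n_actions, p=probs)
    s_next, r = env_step(s, a)
    td = r + gamma * V[s_next] - V[s]    # TD error supplied by the critic
    V[s] += alpha_critic * td            # slow critic update
    grad = -probs
    grad[a] += 1.0                       # gradient of log pi(a | s)
    theta[s] += alpha_actor * td * grad  # fast actor update
    s = s_next
```

On this toy problem the learned policy flips toward the rewarding state (action 1 from state 0, action 0 from state 1); the separation of timescales is what lets each component treat the other as quasi-static in the convergence analysis.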
Computer Science > Multiagent Systems
arXiv:2602.12386 (cs)
[Submitted on 12 Feb 2026]
Title: Provably Convergent Actor-Critic in Risk-averse MARL
Authors: Yizhou Zhang, Eric Mazumdar
Abstract: Learning stationary policies in infinite-horizon general-sum Markov games (MGs) remains a fundamental open problem in Multi-Agent Reinforcement Learning (MARL). While stationary strategies are preferred for their practicality, computing stationary forms of classic game-theoretic equilibria is computationally intractable -- a stark contrast to the comparative ease of solving single-agent RL or zero-sum games. To bridge this gap, we study Risk-averse Quantal response Equilibria (RQE), a solution concept rooted in behavioral game theory that incorporates risk aversion and bounded rationality. We demonstrate that RQE possesses strong regularity conditions that make it uniquely amenable to learning in MGs. We propose a novel two-timescale Actor-Critic algorithm characterized by a fast-timescale actor and a slow-timescale critic. Leveraging the regularity of RQE, we prove that this approach achieves global convergence with finite-sample guarantees. We empirically validate our algorithm in several environments to demonstrate superior convergence properties compared to risk-neutral baselines.
Subjects: Multiagent Systems (cs.MA); Computer Science and Game Theory (cs.GT)
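The two ingredients of RQE named in the abstract, risk aversion and bounded rationality (quantal, i.e. softmax, response), can be sketched numerically. The entropic risk measure, the Gaussian payoffs, and the parameters beta and tau below are illustrative assumptions; the paper's actual risk measures and equilibrium definition may differ:

```python
import numpy as np

def entropic_risk(samples, beta):
    # One standard risk-averse valuation: -(1/beta) * log E[exp(-beta * X)].
    # Assumed here for illustration; penalizes payoff variance for beta > 0.
    return -np.log(np.mean(np.exp(-beta * samples))) / beta

def quantal_response(values, tau):
    # Softmax response with temperature tau: bounded rationality, since the
    # agent plays better actions more often but not deterministically.
    z = np.asarray(values) / tau
    z -= z.max()
    p = np.exp(z)
    return p / p.sum()

rng = np.random.default_rng(1)
# Action 0: safe payoff ~ N(1.0, 0.1); action 1: risky payoff ~ N(1.2, 2.0).
payoffs = [rng.normal(1.0, 0.1, 10_000), rng.normal(1.2, 2.0, 10_000)]

risk_neutral = [p.mean() for p in payoffs]                    # favors action 1
risk_averse = [entropic_risk(p, beta=1.0) for p in payoffs]   # favors action 0

pi_neutral = quantal_response(risk_neutral, tau=0.5)
pi_averse = quantal_response(risk_averse, tau=0.5)
```

A risk-neutral quantal response prefers the higher-mean risky action, while the risk-averse valuation flips the preference toward the safe action; both responses remain smooth in the underlying values, which is the kind of regularity the paper exploits for convergence.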