[2602.20078] Descent-Guided Policy Gradient for Scalable Cooperative Multi-Agent Learning
Summary
The paper presents the Descent-Guided Policy Gradient (DG-PG) method, which enhances cooperative multi-agent reinforcement learning by reducing gradient variance, improving scalability, and achieving faster convergence in complex environments.
Why It Matters
This research addresses a critical limitation in multi-agent reinforcement learning: under a shared reward, cross-agent noise in each agent's learning signal grows with the number of agents and degrades performance. By providing a method that removes this noise, DG-PG has the potential to significantly improve the efficiency and effectiveness of multi-agent systems in various applications, including cloud computing and transportation.
Key Takeaways
- DG-PG reduces gradient variance from Θ(N) to O(1), enhancing learning stability.
- The method preserves equilibria in cooperative games, ensuring consistent outcomes.
- Achieves agent-independent sample complexity of O(1/ε), improving efficiency.
- Demonstrated effective convergence in heterogeneous cloud scheduling tasks.
- Outperforms existing methods like MAPPO and IPPO in scalability.
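The first takeaway, variance dropping from Θ(N) to O(1), can be illustrated with a minimal sketch (not the paper's algorithm). Under a shared reward, agent 0's score-function gradient estimate aggregates noise from every agent, so its variance grows with N; a gradient taken from a differentiable analytical model of the expected reward is deterministic, hence variance 0 regardless of N. The Gaussian policies and quadratic team reward below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def pg_variance(n_agents, n_samples=200_000):
    """Empirical variance of agent 0's score-function (REINFORCE)
    gradient under a shared reward R = -sum_i (a_i - t_i)^2 with
    actions a_i ~ N(theta_i, 1). R aggregates noise from all agents,
    so the per-agent estimate's variance grows linearly in n_agents."""
    theta = np.zeros(n_agents)                  # policy means (illustrative)
    targets = np.ones(n_agents)                 # model-prescribed optima
    a = theta + rng.standard_normal((n_samples, n_agents))
    R = -((a - targets) ** 2).sum(axis=1)       # shared team reward
    baseline = R.mean()                         # standard variance-reduction baseline
    g0 = (R - baseline) * (a[:, 0] - theta[0])  # score-function grad, agent 0
    return g0.var()

v5, v50 = pg_variance(5), pg_variance(50)
print(v50 / v5)  # ratio grows with N: the Theta(N) scaling

# A guidance gradient from the differentiable model is exact and
# deterministic, independent of the other agents: variance 0, i.e. O(1).
guidance_grad = -2 * (0.0 - 1.0)  # d/d theta_0 of E[R] = -2(theta_0 - t_0)
print(guidance_grad)
```

Note that even with a mean-reward baseline, the score-function estimate's variance still scales linearly with the team size, while the analytical guidance gradient is a constant.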
Computer Science > Multiagent Systems
arXiv:2602.20078 (cs)
[Submitted on 23 Feb 2026]
Title: Descent-Guided Policy Gradient for Scalable Cooperative Multi-Agent Learning
Authors: Shan Yang, Yang Liu
Abstract: Scaling cooperative multi-agent reinforcement learning (MARL) is fundamentally limited by cross-agent noise: when agents share a common reward, the actions of all $N$ agents jointly determine each agent's learning signal, so cross-agent noise grows with $N$. In the policy gradient setting, per-agent gradient estimate variance scales as $\Theta(N)$, yielding sample complexity $\mathcal{O}(N/\epsilon)$. We observe that many domains -- cloud computing, transportation, power systems -- have differentiable analytical models that prescribe efficient system states. In this work, we propose Descent-Guided Policy Gradient (DG-PG), a framework that constructs noise-free per-agent guidance gradients from these analytical models, decoupling each agent's gradient from the actions of all others. We prove that DG-PG reduces gradient variance from $\Theta(N)$ to $\mathcal{O}(1)$, preserves the equilibria of the cooperative game, and achieves agent-independent sample complexity $\mathcal{O}(1/\epsilon)$. On a heterogeneous cloud scheduling task with up to 200 agents, DG-PG converges within 10 episodes at every tested s...
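The abstract's cloud-scheduling setting suggests how a differentiable analytical model can yield per-agent guidance gradients. The sketch below assumes a made-up capacity-provisioning cost (not the paper's scheduling model): each agent's closed-form gradient depends only on its own parameter, so 200 heterogeneous agents descend independently and deterministically, with no cross-agent noise.

```python
import numpy as np

# Hypothetical differentiable system model for n heterogeneous servers:
#   cost(theta) = sum_i ( load_i / theta_i + price_i * theta_i ),
# where agent i provisions capacity theta_i > 0. This stands in for the
# kind of analytical model the abstract describes; it is NOT the paper's
# actual scheduler.
rng = np.random.default_rng(1)
n_agents = 200
load = rng.uniform(1.0, 5.0, n_agents)    # per-server demand (assumed)
price = rng.uniform(0.5, 2.0, n_agents)   # per-unit capacity cost (assumed)
theta = np.ones(n_agents)                 # initial capacities

for _ in range(500):
    # Closed-form per-agent guidance gradient: it depends only on agent
    # i's own theta_i, so it is noise-free and decoupled from the others.
    guidance = -load / theta**2 + price
    theta = np.clip(theta - 0.1 * guidance, 1e-3, None)

# This model's analytical optimum is theta_i* = sqrt(load_i / price_i);
# every agent reaches it regardless of how many other agents exist.
optimum = np.sqrt(load / price)
print(np.max(np.abs(theta - optimum)))
```

Because each guidance gradient is a deterministic function of one agent's parameter, the per-iteration cost and the number of iterations are both independent of the team size, mirroring the agent-independent sample complexity the abstract claims.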