[2601.21972] Learning Decentralized LLM Collaboration with Multi-Agent Actor Critic
Summary
The paper presents Multi-Agent Actor-Critic (MAAC) methods for optimizing decentralized collaboration among large language models (LLMs), addressing the high variance and centralized-execution constraints of existing reinforcement learning approaches.
Why It Matters
As LLMs become increasingly prevalent, optimizing their collaboration in decentralized settings is crucial for enhancing performance and flexibility. This research contributes to the understanding of how MAAC methods can improve LLM interactions, which has implications for various applications in AI.
Key Takeaways
- Decentralized LLM collaboration can enhance flexibility and performance.
- MAAC methods can outperform traditional Monte Carlo approaches in certain scenarios.
- Two MAAC approaches, CoLLM-CC and CoLLM-DC, were proposed and tested.
- CoLLM-DC shows promise in short-horizon tasks but struggles with long-horizon tasks.
- The research provides insights into optimizing LLM collaboration for diverse applications.
Computer Science > Artificial Intelligence
arXiv:2601.21972 (cs)
[Submitted on 29 Jan 2026 (v1), last revised 13 Feb 2026 (this version, v3)]
Title: Learning Decentralized LLM Collaboration with Multi-Agent Actor Critic
Authors: Shuo Liu, Tianle Chen, Ryan Amiri, Christopher Amato
Abstract: Recent work has explored optimizing LLM collaboration through Multi-Agent Reinforcement Learning (MARL). However, most MARL fine-tuning approaches rely on predefined execution protocols, which often require centralized execution. Decentralized LLM collaboration is more appealing in practice, as agents can run inference in parallel with flexible deployments. Current approaches also use Monte Carlo methods for fine-tuning, which suffer from high variance and thus require more samples to train effectively. Actor-critic methods are prevalent in MARL for dealing with these issues, so we developed Multi-Agent Actor-Critic (MAAC) methods to optimize decentralized LLM collaboration. In this paper, we analyze when and why these MAAC methods are beneficial. We propose two MAAC approaches: CoLLM-CC, with a Centralized Critic, and CoLLM-DC, with Decentralized Critics. Our experiments across writing, coding, and game-playing domains show that Monte Carlo methods and CoLLM-DC can achieve perfo...
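The abstract's variance argument can be illustrated with a small self-contained sketch (all reward numbers and the idealized critic here are hypothetical, not from the paper): a Monte Carlo return sums noisy rewards over the whole horizon, so its variance grows with episode length, whereas an actor-critic's one-step TD target bootstraps on a value estimate after a single noisy reward.

```python
import random
import statistics

random.seed(0)

GAMMA = 0.99    # discount factor (illustrative choice)
HORIZON = 10    # episode length (illustrative choice)
NOISE = 0.5     # per-step reward noise std dev (illustrative choice)

def discounted_return(rewards):
    """Discounted sum of a reward sequence, G = sum_t gamma^t * r_t."""
    g = 0.0
    for r in reversed(rewards):
        g = r + GAMMA * g
    return g

mc_estimates, td_targets = [], []
for _ in range(5000):
    # Each step pays an expected reward of 1.0 plus Gaussian noise.
    rewards = [1.0 + random.gauss(0.0, NOISE) for _ in range(HORIZON)]

    # Monte Carlo estimate: the full noisy return of the episode.
    mc_estimates.append(discounted_return(rewards))

    # Actor-critic (one-step TD) target: one noisy reward plus the
    # critic's value of the next state. We use the exact expected
    # value as an idealized critic to isolate the variance effect.
    next_value = discounted_return([1.0] * (HORIZON - 1))
    td_targets.append(rewards[0] + GAMMA * next_value)

mc_var = statistics.pvariance(mc_estimates)
td_var = statistics.pvariance(td_targets)
print(f"Monte Carlo variance: {mc_var:.3f}")
print(f"TD target variance:   {td_var:.3f}")
```

The TD target's variance stays near the single-step noise level, while the Monte Carlo return accumulates noise from every step; the trade-off is that bootstrapping introduces bias when the critic is imperfect, which is the tension the paper's CoLLM-CC/CoLLM-DC comparison probes.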