[2512.03528] Multi-Agent Reinforcement Learning with Communication-Constrained Priors
Computer Science > Artificial Intelligence
arXiv:2512.03528 (cs)
[Submitted on 3 Dec 2025 (v1), last revised 1 Mar 2026 (this version, v2)]

Title: Multi-Agent Reinforcement Learning with Communication-Constrained Priors
Authors: Guang Yang, Tianpei Yang, Jingwen Qiao, Yanqing Wu, Jing Huo, Xingguo Chen, Yang Gao

Abstract: Communication is an effective means of improving the learning of cooperative policies in multi-agent systems. In most real-world scenarios, however, lossy communication is prevalent. Existing multi-agent reinforcement learning methods with communication struggle to apply to complex, dynamic real-world environments because of their limited scalability and robustness. To address these challenges, we propose a generalized communication-constrained model that uniformly characterizes communication conditions across different scenarios. We then use this model as a learning prior to distinguish lossy from lossless messages in a given scenario. Additionally, we decouple the impact of lossy and lossless messages on distributed decision-making using a dual mutual information estimator, and introduce a communication-constrained multi-agent reinforcement learning framework, quantifying the impact of communica...
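The abstract's communication-constrained model can be illustrated with a minimal sketch. The function below is a hypothetical stand-in, not the authors' implementation: it simulates a lossy channel that drops each agent's message with some probability and perturbs the surviving ones with noise, returning a mask that separates lossless from lossy messages (the distinction the learning prior is built on).

```python
import numpy as np

def lossy_channel(messages, drop_prob=0.2, noise_std=0.1, rng=None):
    """Illustrative communication-constrained channel (assumed model).

    With probability `drop_prob` a message is lost entirely (zeroed out),
    and surviving messages are perturbed with Gaussian noise. Returns the
    received messages and a boolean mask marking which got through.
    """
    rng = rng or np.random.default_rng()
    messages = np.asarray(messages, dtype=float)
    n = messages.shape[0]
    delivered = rng.random(n) >= drop_prob        # True -> message delivered
    noise = rng.normal(0.0, noise_std, size=messages.shape)
    received = np.where(delivered[:, None], messages + noise, 0.0)
    return received, delivered

# Example: 4 agents each broadcast a 3-dimensional message
msgs = np.ones((4, 3))
received, mask = lossy_channel(msgs, drop_prob=0.5,
                               rng=np.random.default_rng(0))
```

A downstream policy could condition on `mask` to treat dropped messages differently from noisy but delivered ones, which is the kind of decoupling the abstract describes.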