[2604.06691] KD-MARL: Resource-Aware Knowledge Distillation in Multi-Agent Reinforcement Learning
Computer Science > Artificial Intelligence
arXiv:2604.06691 (cs)
[Submitted on 8 Apr 2026]

Title: KD-MARL: Resource-Aware Knowledge Distillation in Multi-Agent Reinforcement Learning
Authors: Monirul Islam Pavel, Siyi Hu, Muhammad Anwar Masum, Mahardhika Pratama, Ryszard Kowalczyk, Zehong Jimmy Cao

Abstract: Real-world deployment of multi-agent reinforcement learning (MARL) systems is fundamentally constrained by limited compute, memory, and inference time. While expert policies achieve high performance, they rely on costly decision cycles and large-scale models that are impractical for edge devices or embedded platforms. Knowledge distillation (KD) offers a promising path toward resource-aware execution, but existing KD methods in MARL focus narrowly on action imitation, often neglecting coordination structure and assuming uniform agent capabilities. We propose resource-aware Knowledge Distillation for Multi-Agent Reinforcement Learning (KD-MARL), a two-stage framework that transfers coordinated behavior from a centralized expert to lightweight, decentralized student agents. The student policies are trained without a critic, relying instead on distilled advantage signals and structured policy supervision to preserve coordination under heterogeneous and limited observations. Our approach transfers both a...
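The abstract describes critic-free student training driven by distilled advantage signals and policy supervision from a centralized teacher. The paper's actual formulation is not shown on this page; as a rough illustration of the general idea, the sketch below implements a generic advantage-weighted policy-distillation loss (KL divergence from teacher to student, weighted by teacher advantage estimates). All names, the clipping of negative advantages, and the exact weighting scheme are assumptions for illustration, not the authors' method.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last (action) axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits: np.ndarray,
                 teacher_logits: np.ndarray,
                 teacher_advantage: np.ndarray) -> float:
    """Advantage-weighted KL(teacher || student), averaged over a batch.

    student_logits, teacher_logits: shape (batch, n_actions)
    teacher_advantage: shape (batch,) -- advantage estimates from the
    centralized teacher; the student itself has no critic.
    """
    p_t = softmax(teacher_logits)
    log_p_t = np.log(p_t + 1e-12)
    log_p_s = np.log(softmax(student_logits) + 1e-12)
    kl = (p_t * (log_p_t - log_p_s)).sum(axis=-1)   # per-sample KL
    w = np.maximum(teacher_advantage, 0.0)          # emphasize high-advantage states
    return float((w * kl).mean())
```

In this sketch the weighting simply up-weights imitation on states the teacher judged advantageous; a decentralized student would minimize this loss on its own (possibly partial) observations, which is one plausible reading of "distilled advantage signals" in the abstract.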