[2404.10976] Group-Aware Coordination Graph for Multi-Agent Reinforcement Learning
Computer Science > Machine Learning

arXiv:2404.10976 (cs)
[Submitted on 17 Apr 2024 (v1), last revised 10 Apr 2026 (this version, v4)]

Title: Group-Aware Coordination Graph for Multi-Agent Reinforcement Learning
Authors: Wei Duan, Jie Lu, Junyu Xuan

Abstract: Cooperative Multi-Agent Reinforcement Learning (MARL) necessitates seamless collaboration among agents, often represented by an underlying relation graph. Existing methods for learning this graph primarily focus on agent-pair relations, neglecting higher-order relationships. While several approaches attempt to extend cooperation modelling to encompass behaviour similarities within groups, they commonly fall short in concurrently learning the latent graph, thereby constraining the information exchange among partially observed agents. To overcome these limitations, we present a novel approach to infer the Group-Aware Coordination Graph (GACG), which is designed to capture both the cooperation between agent pairs based on current observations and group-level dependencies from behaviour patterns observed across trajectories. This graph is further used in graph convolution for information exchange between agents during decision-making. To further ensure behavioural consistency among agents within the same group, we introduce a group distance loss, which promotes gro...
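The abstract's pipeline can be illustrated with a minimal sketch: combine pairwise (observation-based) edges with group-level (behaviour-based) edges into one adjacency matrix, run a graph-convolution step over it, and apply a contrastive-style group distance loss. Everything below is an assumption for illustration, not the paper's actual GACG architecture: the similarity measure, the equal mixing of the two edge types, the fixed group assignment, and the margin-based loss form are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, obs_dim, hid_dim = 4, 8, 16

# Hypothetical inputs: current observations and a hard group assignment
# (in the paper, group structure is inferred from behaviour across trajectories)
obs = rng.normal(size=(n_agents, obs_dim))
groups = np.array([0, 0, 1, 1])  # assumed, for illustration only

def cosine_sim(x):
    """Pairwise cosine similarity between row vectors."""
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    return x @ x.T

# Agent-pair edges from current observations, group edges from membership;
# the 0.5/0.5 mix is an arbitrary illustrative choice
pair_adj = cosine_sim(obs)
group_adj = (groups[:, None] == groups[None, :]).astype(float)
adj = 0.5 * pair_adj + 0.5 * group_adj
np.fill_diagonal(adj, 1.0)

# One graph-convolution layer for information exchange: H = relu(A_norm X W)
adj_norm = adj / adj.sum(axis=1, keepdims=True)
W = rng.normal(size=(obs_dim, hid_dim)) * 0.1
h = np.maximum(adj_norm @ obs @ W, 0.0)

def group_distance_loss(h, groups, margin=1.0):
    """One plausible 'group distance' loss: pull same-group embeddings
    together, push different-group embeddings apart by at least a margin."""
    loss, count = 0.0, 0
    for i in range(len(h)):
        for j in range(i + 1, len(h)):
            d = np.linalg.norm(h[i] - h[j])
            loss += d if groups[i] == groups[j] else max(0.0, margin - d)
            count += 1
    return loss / count

print(h.shape)                              # (4, 16)
print(group_distance_loss(h, groups) >= 0)  # True
```

The loss shown is a generic contrastive distance term; the paper's actual group distance loss may differ in form, but the intent matches the abstract: encourage behavioural consistency within a group and specialisation between groups.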