[2508.02948] Sample-Efficient Distributionally Robust Multi-Agent Reinforcement Learning via Online Interaction
Computer Science > Machine Learning

arXiv:2508.02948 (cs)

[Submitted on 4 Aug 2025 (v1), last revised 1 Mar 2026 (this version, v2)]

Title: Sample-Efficient Distributionally Robust Multi-Agent Reinforcement Learning via Online Interaction

Authors: Zain Ulabedeen Farhat, Debamita Ghosh, George K. Atia, Yue Wang

Abstract: Well-trained multi-agent systems can fail when deployed in real-world environments due to model mismatch between the training and deployment environments, caused by environment uncertainties such as noise or adversarial attacks. Distributionally Robust Markov Games (DRMGs) enhance system resilience by optimizing for worst-case performance over a defined set of environmental uncertainties. However, current methods are limited by their dependence on simulators or large offline datasets, which are often unavailable. This paper pioneers the study of online learning in DRMGs, where agents learn directly from environmental interactions without prior data. We introduce the Multiplayer Optimistic Robust Nash Value Iteration (MORNAVI) algorithm and provide the first provable guarantees for this setting. Our theoretical analysis demonstrates that the algorithm achieves low regret and efficiently finds the optimal robust policy for uncertainty sets measured by Total Variation divergenc...
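To make the "worst-case performance over a defined set of environmental uncertainties" concrete, the sketch below computes the worst-case expected value of a finite distribution over a Total Variation ball. This is a generic illustration of a TV-robust expectation, not the paper's MORNAVI algorithm; the function name and inputs are hypothetical.

```python
def tv_worst_case_value(p, v, rho):
    """Minimize E_q[v] over all q with TV(p, q) <= rho.

    For a finite support, the minimizer shifts up to rho probability
    mass from the highest-value outcomes onto the lowest-value outcome
    (moving mass m changes TV distance by exactly m).
    p   : nominal probability vector (hypothetical transition model)
    v   : value of each outcome
    rho : radius of the TV uncertainty set
    """
    low = min(range(len(v)), key=lambda i: v[i])  # worst outcome index
    q = list(p)
    budget = rho
    # Drain mass from high-value outcomes first (greedy is optimal here).
    for i in sorted(range(len(v)), key=lambda i: -v[i]):
        if i == low:
            continue
        take = min(q[i], budget)
        q[i] -= take
        q[low] += take
        budget -= take
        if budget <= 0:
            break
    return sum(qi * vi for qi, vi in zip(q, v))


# Example: nominal value is 0.5; a TV perturbation of 0.2 drives it to 0.3.
print(tv_worst_case_value([0.5, 0.5], [0.0, 1.0], 0.2))  # -> 0.3
```

A robust value-iteration method evaluates such an inner minimization at every state-action pair, so the optimistic value estimates account for adversarial shifts of the transition model within the TV ball.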