[2602.21020] Matching Multiple Experts: On the Exploitability of Multi-Agent Imitation Learning
Summary
This paper explores the challenges of multi-agent imitation learning (MA-IL), focusing on the exploitability of policies learned in multi-agent environments, and provides new theoretical insights into Nash equilibria.
Why It Matters
Understanding the limitations and potential of multi-agent imitation learning is crucial for developing robust AI systems. This research addresses gaps in existing literature regarding policy performance and Nash equilibrium characterizations, which are essential for applications in competitive environments.
Key Takeaways
- The paper presents impossibility results for learning low-exploitable policies in multi-agent settings.
- It introduces a new hardness result related to characterizing the Nash gap with fixed measure matching errors.
- The authors propose a method to overcome these challenges using strategic dominance assumptions.
- A Nash imitation gap is established under specific conditions, providing a framework for future research.
- The findings have implications for the design of AI systems that interact in competitive environments.
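To make the central notion concrete: the exploitability (Nash gap) of a joint policy is the total payoff players could gain by unilaterally deviating to a best response. Below is a minimal, illustrative sketch for a two-player normal-form game; the matching-pennies payoffs and policies are my own example, not from the paper.

```python
import numpy as np

def exploitability(A, B, x, y):
    """Nash gap of joint mixed strategy (x, y).

    A, B: payoff matrices for the row and column player.
    Each term is that player's best-response gain while the
    other player's strategy is held fixed.
    """
    row_gain = np.max(A @ y) - x @ A @ y   # row player's deviation gain
    col_gain = np.max(x @ B) - x @ B @ y   # column player's deviation gain
    return row_gain + col_gain

# Matching pennies (zero-sum); its unique Nash equilibrium is uniform play.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A
uniform = np.array([0.5, 0.5])
biased = np.array([0.9, 0.1])

print(exploitability(A, B, uniform, uniform))  # 0.0 at equilibrium
print(exploitability(A, B, biased, uniform))   # positive: exploitable
```

A learned policy with zero measure-matching error need not have zero exploitability in general games, which is exactly the kind of gap the paper's impossibility results formalize.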
Computer Science > Machine Learning
arXiv:2602.21020 (cs)
[Submitted on 24 Feb 2026]
Title: Matching Multiple Experts: On the Exploitability of Multi-Agent Imitation Learning
Authors: Antoine Bergerault, Volkan Cevher, Negar Mehr
Abstract: Multi-agent imitation learning (MA-IL) aims to learn optimal policies from expert demonstrations of interactions in multi-agent interactive domains. Despite existing guarantees on the performance of the resulting learned policies, characterizations of how far the learned policies are from a Nash equilibrium are missing for offline MA-IL. In this paper, we demonstrate impossibility and hardness results of learning low-exploitable policies in general $n$-player Markov Games. We do so by providing examples where even exact measure matching fails, and demonstrating a new hardness result on characterizing the Nash gap given a fixed measure matching error. We then show how these challenges can be overcome using strategic dominance assumptions on the expert equilibrium. Specifically, for the case of dominant strategy expert equilibria, assuming Behavioral Cloning error $\epsilon_{\text{BC}}$, this provides a Nash imitation gap of $\mathcal{O}\left(n\epsilon_{\text{BC}}/(1-\gamma)^2\right)$ for a discount factor $\gamma$. We generalize this result with a new notion o...
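The abstract's bound $\mathcal{O}\left(n\epsilon_{\text{BC}}/(1-\gamma)^2\right)$ can be evaluated directly to see how it scales. The sketch below drops the big-O constant (an assumption for illustration) and shows how the bound grows with the number of agents $n$ and, quadratically in the effective horizon, as $\gamma \to 1$.

```python
def nash_gap_bound(n, eps_bc, gamma):
    """Nash imitation gap bound n * eps_BC / (1 - gamma)^2,
    with the big-O constant taken as 1 for illustration."""
    return n * eps_bc / (1.0 - gamma) ** 2

# A tenfold longer effective horizon inflates the bound a hundredfold.
for gamma in (0.9, 0.99):
    print(gamma, nash_gap_bound(n=2, eps_bc=0.01, gamma=gamma))
```

This quadratic dependence on the effective horizon $1/(1-\gamma)$ mirrors the well-known compounding-error behavior of Behavioral Cloning in the single-agent setting.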