[2602.22810] Multi-agent imitation learning with function approximation: Linear Markov games and beyond
Summary
This article presents a theoretical analysis of multi-agent imitation learning (MAIL) in linear Markov games, introducing a novel interactive algorithm whose sample complexity depends only on the feature dimension and which outperforms behavior cloning in games such as Tic-Tac-Toe and Connect4.
Why It Matters
The research addresses a significant gap in the understanding of multi-agent systems by providing a new theoretical framework for imitation learning. This matters for building more sample-efficient algorithms in environments where agents must learn from one another, and it broadens what AI systems can do in complex multi-agent decision-making scenarios.
Key Takeaways
- Introduces a new concentrability coefficient for linear Markov games.
- Presents a computationally efficient interactive MAIL algorithm.
- Demonstrates improved performance over traditional behavior cloning in games such as Tic-Tac-Toe and Connect4.
- Highlights the importance of feature-level analysis in multi-agent settings.
- Sets a foundation for future research in interactive learning among agents.
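To make the first takeaway concrete, here is a small illustrative sketch of why a feature-level concentrability coefficient can be smaller than its state-action analog. The paper's exact definitions are not reproduced in this summary, so the second-moment construction below (covariances `sigma_exp`, `sigma_dev`, and the generalized-eigenvalue ratio `c_feat`) is an assumption for illustration only: the state-action coefficient is a worst-case density ratio over individual (state, action) pairs, while the feature-level analog only compares occupancies through a d-dimensional feature map.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sa, d = 200, 4  # number of (state, action) pairs; feature dimension

# Feature of each (state, action) pair, and two occupancy measures:
# an expert's and a deviating policy's (both random here for illustration).
phi = rng.normal(size=(n_sa, d))
d_exp = rng.dirichlet(np.ones(n_sa))
d_dev = rng.dirichlet(np.ones(n_sa))

# State-action level coefficient: worst-case density ratio over (s, a).
c_sa = np.max(d_dev / d_exp)

# Feature-level analog (illustrative definition): worst-case ratio of
# feature second moments along any direction x, i.e. the largest
# generalized eigenvalue of (Sigma_dev, Sigma_exp), Sigma = E_d[phi phi^T].
sigma_exp = (phi * d_exp[:, None]).T @ phi
sigma_dev = (phi * d_dev[:, None]).T @ phi
c_feat = np.max(np.linalg.eigvals(np.linalg.solve(sigma_exp, sigma_dev)).real)

# c_feat never exceeds c_sa, and is often far smaller when the features
# aggregate many (s, a) pairs that behave similarly.
print(c_sa, c_feat)
```

The inequality c_feat <= c_sa holds because x^T Sigma_dev x is a d_dev-weighted sum of squared projections, each term of which is at most c_sa times the corresponding d_exp-weighted term.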
Paper Details
Computer Science > Machine Learning — arXiv:2602.22810 (cs)
Submitted on 26 Feb 2026
Title: Multi-agent imitation learning with function approximation: Linear Markov games and beyond
Authors: Luca Viano, Till Freihaut, Emanuele Nevali, Volkan Cevher, Matthieu Geist, Giorgia Ramponi
Abstract: In this work, we present the first theoretical analysis of multi-agent imitation learning (MAIL) in linear Markov games where both the transition dynamics and each agent's reward function are linear in some given features. We demonstrate that by leveraging this structure, it is possible to replace the state-action level "all policy deviation concentrability coefficient" (Freihaut et al., arXiv:2510.09325) with a concentrability coefficient defined at the feature level, which can be much smaller than the state-action analog when the features are informative about states' similarity. Furthermore, to circumvent the need for any concentrability coefficient, we turn to the interactive setting. We provide the first computationally efficient interactive MAIL algorithm for linear Markov games and show that its sample complexity depends only on the dimension of the feature map $d$. Building on these theoretical findings, we propose a deep interactive MAIL algorithm which clearly outperforms behavior cloning (BC) on games such as Tic...
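The abstract's linearity assumption can be sketched concretely: in a linear Markov game, each agent's reward is an inner product of a shared feature map with a reward parameter, and each next-state distribution is an inner product with state-indexed transition parameters. The construction below is a minimal illustration (all names and the mixture-style construction of valid transition probabilities are assumptions, not the paper's setup).

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_states, n_actions = 4, 5, 3

# Feature map phi(s, a): chosen nonnegative and summing to 1 over the d
# components, so the linear transition model below yields valid distributions.
phi = rng.random(size=(n_states, n_actions, d))
phi /= phi.sum(axis=-1, keepdims=True)

# Linear rewards: r_i(s, a) = <phi(s, a), theta_i> for each agent i.
theta = rng.normal(size=d)
reward = phi @ theta                      # shape (n_states, n_actions)

# Linear transitions: P(s' | s, a) = <phi(s, a), mu(s')>, where each of the
# d rows of mu is itself a distribution over next states, so P(. | s, a)
# is a convex mixture of d latent-factor distributions.
mu = rng.random(size=(d, n_states))
mu /= mu.sum(axis=-1, keepdims=True)
P = phi @ mu                              # shape (n_states, n_actions, n_states)

assert np.allclose(P.sum(axis=-1), 1.0)   # each P(. | s, a) sums to one
```

The sample-complexity result quoted in the abstract hinges on exactly this structure: estimation error concentrates in the d-dimensional parameters (theta, mu) rather than over all state-action pairs.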