[2510.16187] Zero-Shot Coordination in Ad Hoc Teams with Generalized Policy Improvement and Difference Rewards
Computer Science > Multiagent Systems
arXiv:2510.16187 (cs)
[Submitted on 17 Oct 2025 (v1), last revised 31 Mar 2026 (this version, v2)]

Title: Zero-Shot Coordination in Ad Hoc Teams with Generalized Policy Improvement and Difference Rewards
Authors: Rupal Nigam, Niket Parikh, Hamid Osooli, Mikihisa Yuasa, Jacob Heglund, Huy T. Tran

Abstract: Real-world multi-agent systems may require ad hoc teaming, where an agent must coordinate with previously unseen teammates to solve a task in a zero-shot manner. Prior work often either selects a pretrained policy based on an inferred model of the new teammates or pretrains a single policy that is robust to potential teammates. Instead, we propose to leverage all pretrained policies in a zero-shot transfer setting. We formalize this problem as an ad hoc multi-agent Markov decision process and present a solution that uses two key ideas, generalized policy improvement and difference rewards, for efficient and effective knowledge transfer between different teams. We empirically demonstrate that our algorithm, Generalized Policy improvement for Ad hoc Teaming (GPAT), successfully enables zero-shot transfer to new teams in three simulated environments: cooperative foraging, predator-prey, and Overcooked. We also demonstrate our algorithm in a rea...
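The abstract names generalized policy improvement (GPI) as one of the two key ideas. As a rough illustration of the standard GPI rule (acting greedily with respect to the pointwise maximum over the Q-functions of several pretrained policies), here is a minimal sketch; the array shapes, function name, and toy values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def gpi_action(q_tables, state):
    """Generalized policy improvement: pick the action maximizing
    max_i Q_i(state, a) over a library of pretrained policies i.

    q_tables: array of shape (num_policies, num_states, num_actions),
    holding each pretrained policy's Q-value estimates (illustrative).
    """
    q_max = np.max(q_tables[:, state, :], axis=0)  # best value per action across policies
    return int(np.argmax(q_max))                   # greedy action under the max

# Toy example: two pretrained policies, three states, two actions.
q = np.array([
    [[1.0, 0.0], [0.2, 0.5], [0.0, 0.0]],  # policy 1's Q-table
    [[0.0, 0.8], [0.9, 0.1], [0.3, 0.4]],  # policy 2's Q-table
])
print(gpi_action(q, 0))  # → 0 (policy 1's Q=1.0 dominates)
print(gpi_action(q, 1))  # → 0 (policy 2's Q=0.9 dominates)
```

The resulting policy is guaranteed (in the exact-Q case) to perform at least as well as every policy in the library, which is what makes it a natural mechanism for reusing a set of team-specific pretrained policies.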