[2603.02783] Generative adversarial imitation learning for robot swarms: Learning from human demonstrations and trained policies
Computer Science > Robotics

arXiv:2603.02783 (cs) [Submitted on 3 Mar 2026]

Title: Generative adversarial imitation learning for robot swarms: Learning from human demonstrations and trained policies

Authors: Mattes Kraus, Jonas Kuckling

Abstract: In imitation learning, robots learn from demonstrations of the desired behavior. Most work on imitation learning for swarm robotics provides the demonstrations as rollouts of an existing policy. In this work, we present a framework based on generative adversarial imitation learning that aims to learn collective behaviors from human demonstrations. We evaluate our framework across six different missions, learning both from manual demonstrations and from demonstrations derived from a PPO-trained policy. Results show that the imitation learning process learns qualitatively meaningful behaviors that perform comparably to the provided demonstrations. Additionally, we deploy the learned policies on a swarm of TurtleBot 4 robots in real-robot experiments. The exhibited behaviors preserve their visually recognizable character, and their performance is comparable to that achieved in simulation.

Subjects: Robotics (cs.RO); Machine Learning (cs.LG); Multiagent Systems (cs.MA)
Cite as: arXiv:2603.02783 [cs.RO]
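The abstract's core mechanism, generative adversarial imitation learning, trains a discriminator to distinguish expert state-action pairs from the imitator's, and uses the discriminator's confusion as a surrogate reward for the policy. The following is a minimal sketch of that idea on toy data, not the authors' implementation: a logistic-regression discriminator over synthetic "expert" and "policy" state-action pairs, with the standard GAIL-style surrogate reward -log(1 - D(s, a)). All data and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy 2-D state-action features: expert demonstrations cluster around +1,
# the (untrained) imitator policy's rollouts around -1. Purely illustrative.
expert = rng.normal(loc=1.0, scale=0.5, size=(200, 2))
policy = rng.normal(loc=-1.0, scale=0.5, size=(200, 2))

# Discriminator D(s, a) = sigmoid(w . x + b), trained to output ~1 on
# expert pairs and ~0 on policy pairs (plain logistic regression).
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(200):
    d_exp = sigmoid(expert @ w + b)
    d_pol = sigmoid(policy @ w + b)
    # Gradient ascent on log D(expert) + log(1 - D(policy)).
    grad_w = expert.T @ (1.0 - d_exp) / len(expert) - policy.T @ d_pol / len(policy)
    grad_b = np.mean(1.0 - d_exp) - np.mean(d_pol)
    w += lr * grad_w
    b += lr * grad_b

# GAIL-style surrogate reward for the imitator: high where the
# discriminator believes the pair came from an expert demonstration.
reward_policy = -np.log(1.0 - sigmoid(policy @ w + b) + 1e-8)
reward_expert = -np.log(1.0 - sigmoid(expert @ w + b) + 1e-8)
print(reward_policy.mean(), reward_expert.mean())
```

In full GAIL this surrogate reward would then drive a reinforcement-learning update of the policy (the paper pairs it with PPO-derived demonstrations), and the discriminator and policy are updated alternately.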