[2509.11481] RAPTOR: A Foundation Policy for Quadrotor Control
Computer Science > Robotics
arXiv:2509.11481 (cs)
[Submitted on 15 Sep 2025 (v1), last revised 6 Apr 2026 (this version, v2)]

Title: RAPTOR: A Foundation Policy for Quadrotor Control
Authors: Jonas Eschmann, Dario Albani, Giuseppe Loianno

Abstract: Humans are remarkably data-efficient when adapting to new unseen conditions, like driving a new car. In contrast, modern robotic control systems, like neural network policies trained using Reinforcement Learning (RL), are highly specialized for single environments. Because of this overfitting, they are known to break down even under small differences like the Simulation-to-Reality (Sim2Real) gap and require system identification and retraining for even minimal changes to the system. In this work, we present RAPTOR, a method for training a highly adaptive foundation policy for quadrotor control. Our method enables training a single, end-to-end neural-network policy to control a wide variety of quadrotors. We test 10 different real quadrotors from 32 g to 2.4 kg that also differ in motor type (brushed vs. brushless), frame type (soft vs. rigid), propeller type (2/3/4-blade), and flight controller (PX4/Betaflight/Crazyflie/M5StampFly). We find that a tiny, three-layer policy with only 2084 parameters is sufficient for zero-shot adaptation to a wide variety of platforms. The adaptation thr...
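The abstract highlights that the policy is a three-layer network with only 2084 parameters. As a minimal sketch of how such a count arises, the snippet below tallies weights and biases for a fully connected MLP; the layer sizes used here (108 inputs, two hidden layers of 16, 4 outputs) are purely illustrative assumptions that happen to total 2084, not dimensions reported by the paper.

```python
def mlp_param_count(dims):
    """Count weights + biases of a fully connected MLP with layer sizes `dims`."""
    return sum(d_in * d_out + d_out for d_in, d_out in zip(dims, dims[1:]))

# Hypothetical layer sizes (input -> hidden -> hidden -> output); chosen only
# to illustrate how a 2084-parameter, three-layer policy could be shaped.
dims = [108, 16, 16, 4]
print(mlp_param_count(dims))  # -> 2084 with these assumed dimensions
```

At this scale the policy fits comfortably on a microcontroller-class flight controller, which is consistent with the range of platforms (Crazyflie, M5StampFly) the paper evaluates.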