[2508.13998] Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation
Computer Science > Robotics

arXiv:2508.13998 (cs)

[Submitted on 19 Aug 2025 (v1), last revised 6 Apr 2026 (this version, v2)]

Title: Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation

Authors: Yifu Yuan, Haiqin Cui, Yaoting Huang, Yibin Chen, Fei Ni, Zibin Dong, Pengyi Li, Yan Zheng, Hongyao Tang, Jianye Hao

Abstract: Generalization in embodied AI is hindered by the "seeing-to-doing gap," which stems from data scarcity and embodiment heterogeneity. To address this, we pioneer "pointing" as a unified, embodiment-agnostic intermediate representation, defining four core embodied pointing abilities that bridge high-level vision-language comprehension with low-level action primitives. We introduce Embodied-R1, a 3B Vision-Language Model (VLM) specifically designed for embodied reasoning and pointing. Drawing on a wide range of embodied and general visual reasoning datasets, we construct a large-scale dataset, Embodied-Points-200K, which supports the key embodied pointing capabilities. We then train Embodied-R1 using a two-stage Reinforced Fine-tuning (RFT) curriculum with a specialized multi-task reward design. Embodied-R1 achieves state-of-the-art performance on 11 embodied spatial and pointing benchmarks. Critically, it demonstrates robust zero-shot generalization by achieving a 56.2% ...
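To make the "multi-task reward design" for RFT more concrete, here is a minimal sketch of what a verifiable pointing reward could look like: a small format bonus for emitting a parseable point plus an accuracy term for landing inside the ground-truth object region. The `<point>x, y</point>` tag convention, the reward weights, and the function names are illustrative assumptions for this sketch, not the paper's actual reward specification.

```python
import re
from typing import Optional, Tuple

# Assumed output convention: the model emits its answer as
# "<point>x, y</point>" in pixel coordinates. This tag format is an
# illustrative assumption, not necessarily Embodied-R1's actual schema.
POINT_RE = re.compile(r"<point>\s*([\d.]+)\s*,\s*([\d.]+)\s*</point>")

def parse_point(completion: str) -> Optional[Tuple[float, float]]:
    """Extract the last predicted (x, y) point from a model completion."""
    matches = POINT_RE.findall(completion)
    if not matches:
        return None
    x, y = matches[-1]
    return float(x), float(y)

def pointing_reward(
    completion: str,
    target_bbox: Tuple[float, float, float, float],
) -> float:
    """Verifiable reward: format bonus plus point-in-region accuracy.

    target_bbox is (x_min, y_min, x_max, y_max) for the ground-truth
    object region; a segmentation mask could be substituted for the
    containment test without changing the structure of the reward.
    """
    point = parse_point(completion)
    if point is None:
        return 0.0          # no parseable point: neither reward component
    format_reward = 0.1     # small bonus for a well-formed answer
    x, y = point
    x0, y0, x1, y1 = target_bbox
    hit = x0 <= x <= x1 and y0 <= y <= y1
    accuracy_reward = 1.0 if hit else 0.0
    return format_reward + accuracy_reward
```

In an RFT loop (e.g., a GRPO-style policy-gradient update over sampled completions), a reward of this shape scores each rollout independently; the format term gives the policy a learning signal for producing parseable outputs even before its pointing accuracy improves.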