[2511.17411] SPEAR-1: Scaling Beyond Robot Demonstrations via 3D Understanding
Computer Science > Robotics

arXiv:2511.17411 (cs)

[Submitted on 21 Nov 2025 (v1), last revised 27 Apr 2026 (this version, v2)]

Title: SPEAR-1: Scaling Beyond Robot Demonstrations via 3D Understanding

Authors: Nikolay Nikolov, Giuliano Albanese, Sombit Dey, Aleksandar Yanev, Luc Van Gool, Jan-Nico Zaech, Danda Pani Paudel

Abstract: Robotic Foundation Models (RFMs) hold great promise as generalist, end-to-end systems for robot control. Yet their ability to generalize across new environments, tasks, and embodiments remains limited. We argue that a major bottleneck lies in their foundations: most RFMs are built by fine-tuning internet-pretrained Vision-Language Models (VLMs). However, these VLMs are trained on 2D image-language tasks and lack the 3D spatial reasoning inherently required for embodied control in the 3D world. Bridging this gap directly with large-scale robotic data is costly and difficult to scale. Instead, we propose to enrich easy-to-collect non-robotic image data with 3D annotations and enhance a pretrained VLM with 3D understanding capabilities. Following this strategy, we train SPEAR-VLM, a 3D-aware VLM that infers object coordinates in 3D space from a single 2D image. Building on SPEAR-VLM, we introduce our main contribution, SPEAR-1: a robotic foundation model that integrates grounde...
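To make the abstract's core idea concrete, the sketch below illustrates one standard way to lift 2D image observations into 3D object coordinates: pinhole back-projection of a detected pixel with an estimated depth value. This is a minimal illustration only; the intrinsics, the detection tuples, and the random depth map stand in for components the abstract does not specify (the paper's actual annotation pipeline for SPEAR-VLM is not described on this page).

import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift a pixel (u, v) with metric depth into 3D camera coordinates.

    Standard pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth.
    """
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def annotate_objects(detections, depth_map, intrinsics):
    """Attach a 3D camera-frame coordinate to each detected object.

    detections: list of (label, u, v) pixel centers from any 2D detector.
    depth_map:  HxW array of per-pixel depth (e.g. a monocular depth estimate).
    intrinsics: (fx, fy, cx, cy) of the camera; hypothetical values here.
    """
    fx, fy, cx, cy = intrinsics
    annotated = []
    for label, u, v in detections:
        z = float(depth_map[int(v), int(u)])  # read depth at the pixel center
        xyz = backproject(u, v, z, fx, fy, cx, cy)
        annotated.append({"label": label, "pixel": (u, v), "xyz_m": xyz.tolist()})
    return annotated

if __name__ == "__main__":
    # Hypothetical example: one detection in a 640x480 image with made-up intrinsics.
    rng = np.random.default_rng(0)
    depth_map = rng.uniform(0.5, 2.0, size=(480, 640))  # stand-in for estimated depth
    intrinsics = (525.0, 525.0, 320.0, 240.0)
    detections = [("mug", 410.0, 260.0)]
    print(annotate_objects(detections, depth_map, intrinsics))

Annotations of this form (object label plus 3D coordinate per image) are the kind of supervision the abstract argues can be harvested from easy-to-collect non-robotic image data, avoiding the cost of scaling robot demonstrations.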