[2511.13719] Scaling Spatial Intelligence with Multimodal Foundation Models
Computer Science > Computer Vision and Pattern Recognition

arXiv:2511.13719 (cs)

[Submitted on 17 Nov 2025 (v1), last revised 28 Mar 2026 (this version, v4)]

Title: Scaling Spatial Intelligence with Multimodal Foundation Models

Authors: Zhongang Cai, Ruisi Wang, Chenyang Gu, Fanyi Pu, Junxiang Xu, Yubo Wang, Wanqi Yin, Zhitao Yang, Chen Wei, Qingping Sun, Tongxi Zhou, Jiaqi Li, Hui En Pang, Oscar Qian, Yukun Wei, Zhiqian Lin, Xuanke Shi, Kewang Deng, Xiaoyang Han, Zukai Chen, Xiangyu Fan, Hanming Deng, Lewei Lu, Liang Pan, Bo Li, Ziwei Liu, Quan Wang, Dahua Lin, Lei Yang

Abstract: Despite remarkable progress, multimodal foundation models still exhibit surprising deficiencies in spatial intelligence. In this work, we explore scaling up multimodal foundation models to cultivate spatial intelligence within the SenseNova-SI family, built upon established multimodal foundations including visual understanding models (i.e., Qwen3-VL and InternVL3) and unified understanding and generation models (i.e., Bagel). We take a principled approach to constructing high-performing and robust spatial intelligence by systematically curating SenseNova-SI-8M: eight million diverse data samples under a rigorous taxonomy of spatial capabilities. SenseNova-SI demonstrates unprecedented performance across a broad range of spatial intelligence benchmarks: 68.8% on...