[2506.03135] OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models
Computer Science > Computer Vision and Pattern Recognition
arXiv:2506.03135 (cs)
[Submitted on 3 Jun 2025 (v1), last revised 28 Feb 2026 (this version, v3)]

Title: OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models
Authors: Mengdi Jia, Zekun Qi, Shaochen Zhang, Wenyao Zhang, Xinqiang Yu, Jiawei He, He Wang, Li Yi

Abstract: Spatial reasoning is a key aspect of cognitive psychology and remains a bottleneck for current vision-language models (VLMs). While extensive research has aimed to evaluate or improve VLMs' understanding of basic spatial relations, such as distinguishing left from right, near from far, and object counting, these tasks cover only the most elementary layer of spatial reasoning and are largely approaching saturation in the latest reasoning models. In this work, we introduce OmniSpatial, a comprehensive and challenging benchmark for spatial reasoning, grounded in cognitive psychology. OmniSpatial covers four major categories: dynamic reasoning, complex spatial logic, spatial interaction, and perspective-taking, with 50 fine-grained subcategories. Through careful manual annotation, we construct over 8.4K question-answer pairs. Extensive experiments show that both open- and closed-source VLMs exhibit significant limitations in comprehensive spatial …
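
To make the evaluation setup concrete, the sketch below shows one common way to score a benchmark of this shape: per-category accuracy over multiple-choice question-answer pairs grouped into the four categories named in the abstract. The record fields and the `predict` callback are hypothetical illustrations, not OmniSpatial's actual data schema or evaluation harness.

```python
from collections import defaultdict
from typing import Callable

# Hypothetical QA record shape; the benchmark's real schema may differ.
# Each item carries an image reference, a question, candidate answers,
# the gold answer index, and one of the four top-level categories.
Item = dict  # keys: "image", "question", "choices", "answer", "category"

def per_category_accuracy(items: list[Item],
                          predict: Callable[[Item], int]) -> dict[str, float]:
    """Score a model's choice predictions, grouped by spatial-reasoning category."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for item in items:
        total[item["category"]] += 1
        if predict(item) == item["answer"]:
            correct[item["category"]] += 1
    return {cat: correct[cat] / total[cat] for cat in total}

# Usage example with a trivial stand-in model that always picks choice 0.
if __name__ == "__main__":
    demo = [
        {"image": "img_0001.jpg", "question": "Which object is nearer to the camera?",
         "choices": ["the chair", "the lamp"], "answer": 0,
         "category": "perspective-taking"},
        {"image": "img_0002.jpg", "question": "After a 90-degree turn, what is on the left?",
         "choices": ["the door", "the window"], "answer": 1,
         "category": "dynamic reasoning"},
    ]
    print(per_category_accuracy(demo, predict=lambda item: 0))
```

Reporting accuracy per category rather than a single aggregate score is what lets a benchmark like this localize which layer of spatial reasoning a given VLM fails on.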