[2602.20901] SpatiaLQA: A Benchmark for Evaluating Spatial Logical Reasoning in Vision-Language Models
Summary
The paper introduces SpatiaLQA, a benchmark for evaluating spatial logical reasoning in Vision-Language Models (VLMs), highlighting their limitations in complex real-world scenarios.
Why It Matters
As VLMs become more prevalent in applications, understanding their reasoning capabilities is crucial. SpatiaLQA addresses a significant gap in evaluating how these models handle spatial relationships and logical dependencies, which is vital for their effective deployment in real-world tasks.
Key Takeaways
- SpatiaLQA consists of 9,605 question-answer pairs from 241 indoor scenes.
- Current VLMs struggle with spatial logical reasoning, indicating a need for improvement.
- The proposed recursive scene graph assisted reasoning method enhances VLM performance in spatial tasks.
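The paper does not detail its recursive scene graph assisted reasoning method here, but the general idea can be illustrated: represent detected objects as nodes, spatial relations as edges, and answer multi-step spatial questions by recursively following relation edges. The scene graph, object names, and the `reachable` helper below are all hypothetical, for illustration only:

```python
# Hypothetical sketch of scene-graph-assisted spatial reasoning.
# The graph, objects, and relations below are invented for illustration;
# they are NOT from the SpatiaLQA paper.

SCENE_GRAPH = {
    # node -> list of (relation, neighbor) edges in a toy indoor scene
    "mug":   [("on", "table")],
    "table": [("next_to", "sofa")],
    "sofa":  [("in", "living_room")],
}

def reachable(obj, target, visited=None):
    """Recursively search for `target` from `obj` by following
    spatial-relation edges; return the relation path, or None."""
    if visited is None:
        visited = set()
    if obj == target:
        return []
    visited.add(obj)
    for relation, neighbor in SCENE_GRAPH.get(obj, []):
        if neighbor in visited:
            continue
        sub = reachable(neighbor, target, visited)
        if sub is not None:
            return [(obj, relation, neighbor)] + sub
    return None

# Multi-step query: where is the mug, relative to the living room?
path = reachable("mug", "living_room")
```

Chaining relations this way is what lets a model answer questions whose steps depend on one another, rather than reasoning about each object pair in isolation.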
Computer Science > Computer Vision and Pattern Recognition
arXiv:2602.20901 (cs) [Submitted on 24 Feb 2026]
Authors: Yuechen Xie, Xiaoyan Zhang, Yicheng Shan, Hao Zhu, Rui Tang, Rong Wei, Mingli Song, Yuanyu Wan, Jie Song
Abstract: Vision-Language Models (VLMs) have been increasingly applied in real-world scenarios due to their outstanding understanding and reasoning capabilities. Although VLMs have already demonstrated impressive capabilities in common visual question answering and logical reasoning, they still lack the ability to make reasonable decisions in complex real-world environments. We define this ability as spatial logical reasoning, which requires understanding not only the spatial relationships among objects in complex scenes, but also the logical dependencies between steps in multi-step tasks. To bridge this gap, we introduce Spatial Logical Question Answering (SpatiaLQA), a benchmark designed to evaluate the spatial logical reasoning capabilities of VLMs. SpatiaLQA consists of 9,605 question-answer pairs derived from 241 real-world indoor scenes. We conduct extensive experiments on 41 mainstream VLMs, and the results show that even the most advanced models still struggle with spatial...