[2602.11635] Do MLLMs Really Understand Space? A Mathematical Reasoning Evaluation
Computer Science > Artificial Intelligence
arXiv:2602.11635 (cs)
[Submitted on 12 Feb 2026 (v1), last revised 8 Apr 2026 (this version, v2)]

Title: Do MLLMs Really Understand Space? A Mathematical Reasoning Evaluation
Authors: Shuo Lu, Jianjie Cheng, Yinuo Xu, Yongcan Yu, Lijun Sheng, Peijie Wang, Siru Jiang, Yongguan Hu, Run Ling, Yihua Shao, Ao Ma, Wei Feng, Lingxiao He, Meng Wang, Qianlong Xie, Xingxing Wang, Nicu Sebe, Ran He, Jian Liang

Abstract: Multimodal large language models (MLLMs) have achieved strong performance on perception-oriented tasks, yet their ability to perform mathematical spatial reasoning, defined as the capacity to parse and manipulate two- and three-dimensional relations, remains unclear. Humans easily solve textbook-style spatial reasoning problems with over 95% accuracy, but we find that most leading MLLMs fail to reach even 60% on the same tasks. This striking gap highlights spatial reasoning as a fundamental weakness of current models. To investigate this gap, we present MathSpatial, the first large-scale and systematic dataset resource dedicated to mathematical spatial reasoning in MLLMs. MathSpatial provides two complementary subsets: (i) MathSpatial-Bench, a rigorously curated evaluation set of 2,000 problems spanning 3 categories and 11 subtypes, designed to...