[2603.21577] Mind over Space: Can Multimodal Large Language Models Mentally Navigate?
Computer Science > Artificial Intelligence
arXiv:2603.21577 (cs)
[Submitted on 23 Mar 2026]

Title: Mind over Space: Can Multimodal Large Language Models Mentally Navigate?
Authors: Qihui Zhu, Shouwei Ruan, Xiao Yang, Hao Jiang, Yao Huang, Shiji Zhao, Hanwei Fan, Hang Su, Xingxing Wei

Abstract: Despite the widespread adoption of multimodal large language models (MLLMs) in embodied agents, their capabilities remain largely confined to reactive planning from immediate observations, and they consistently fail at spatial reasoning across extended spatiotemporal scales. Cognitive science shows that Biological Intelligence (BI) thrives on "mental navigation": the strategic construction of spatial representations from experience, followed by mental simulation of paths prior to action. To bridge the gap between AI and BI, we introduce Video2Mental, a pioneering benchmark for evaluating the mental navigation capabilities of MLLMs. The task requires constructing hierarchical cognitive maps from long egocentric videos and generating landmark-based path plans step by step, with planning accuracy verified through simulator-based physical interaction. Our benchmarking results reveal that mental navigation capability does not emerge naturally from standard pre-training. Frontier MLLMs struggle profoundly with zero-shot structured spatial representation, and t...
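The abstract describes a two-part task with a verification stage: the model builds a hierarchical cognitive map from egocentric video, emits a landmark-by-landmark path plan, and the plan is judged by physical execution in a simulator. Below is a minimal sketch of what that verification step could look like; all names (GraphSimulator, execute_plan, the example landmarks) are illustrative assumptions, not the Video2Mental API or the authors' implementation, and the simulator is reduced to a toy traversability graph rather than real physical interaction.

```python
# Hypothetical sketch of simulator-based plan verification as described in the
# abstract. GraphSimulator and execute_plan are assumed names, not the
# benchmark's actual interface.

class GraphSimulator:
    """Toy stand-in for a physical simulator: landmarks connected by
    traversable edges. The real benchmark verifies plans via physical
    interaction in a simulator."""

    def __init__(self, edges):
        # Store undirected traversable connections between landmarks.
        self.edges = {frozenset(e) for e in edges}

    def try_move(self, src: str, dst: str) -> bool:
        # A move succeeds only if the two landmarks are directly connected.
        return frozenset((src, dst)) in self.edges


def execute_plan(sim: GraphSimulator, plan: list[str], goal: str) -> bool:
    """Replay a model-generated landmark plan step by step; the plan counts
    as accurate only if every traversal is valid and it ends at the goal."""
    for src, dst in zip(plan, plan[1:]):
        if not sim.try_move(src, dst):
            return False  # the plan requested a physically impossible step
    return bool(plan) and plan[-1] == goal


# Usage: a plan over hypothetical landmarks extracted from an egocentric video.
sim = GraphSimulator([("lobby", "hallway"), ("hallway", "kitchen")])
print(execute_plan(sim, ["lobby", "hallway", "kitchen"], goal="kitchen"))  # True
print(execute_plan(sim, ["lobby", "kitchen"], goal="kitchen"))             # False
```

The design point the abstract emphasizes is that plans are scored by execution rather than by string-matching a reference route, so any physically valid landmark sequence reaching the goal can count as correct.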