[2603.26779] Limits of Imagery Reasoning in Frontier LLM Models
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.26779 (cs)
[Submitted on 25 Mar 2026]

Title: Limits of Imagery Reasoning in Frontier LLM Models
Authors: Sergio Y. Hayashi, Nina S. T. Hirata

Abstract: Large Language Models (LLMs) have demonstrated impressive reasoning capabilities, yet they struggle with spatial tasks that require mental simulation, such as mental rotation. This paper investigates whether equipping an LLM with an external "Imagery Module" -- a tool capable of rendering and rotating 3D models -- can bridge this gap, functioning as a "cognitive prosthetic." We conducted experiments using a dual-module architecture in which a reasoning module (an MLLM) interacts with an imagery module on 3D model rotation tasks. Performance was lower than expected, with accuracy reaching at most 62.5%. Further investigation suggests that even when the burden of maintaining and manipulating a holistic 3D state is outsourced, the system still fails. This reveals that current frontier models lack the foundational visual-spatial primitives required to interface with imagery. Specifically, they lack: (1) the low-level sensitivity to extract spatial signals such as (a) depth, (b) motion, and (c) short-horizon dynamic prediction; and (2) the capacity to reason contemplatively over images, dynamically shifting v...
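The dual-module architecture the abstract describes can be sketched as a tool-use loop: the reasoning module repeatedly asks the imagery module to apply a rotation, inspects the returned state, and stops when it believes the target orientation is reached. The sketch below is purely illustrative; `ImageryModule`, `reasoning_loop`, and the fixed 45-degree rotation policy are hypothetical stand-ins, not the authors' implementation (a real system would render images and query an MLLM rather than compare coordinates).

```python
import math

def rotation_matrix_z(deg):
    """3x3 rotation matrix about the z-axis, for an angle in degrees."""
    r = math.radians(deg)
    c, s = math.cos(r), math.sin(r)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(mat, pts):
    """Apply a 3x3 matrix to a list of 3D points."""
    return [tuple(sum(mat[i][j] * p[j] for j in range(3)) for i in range(3))
            for p in pts]

class ImageryModule:
    """Toy 'imagery module': maintains a holistic 3D state and applies
    requested rotations on the reasoning module's behalf. A real module
    would render the rotated model to an image instead of returning points."""
    def __init__(self, points):
        self.points = list(points)

    def rotate(self, deg):
        self.points = apply(rotation_matrix_z(deg), self.points)
        return self.points

def reasoning_loop(module, target, max_steps=8, step_deg=45):
    """Stand-in for the reasoning module (MLLM): requests fixed-increment
    rotations until the imagery state matches the target orientation,
    then reports how many tool calls were needed (None on failure)."""
    for step in range(max_steps):
        matched = all(math.isclose(a, b, abs_tol=1e-9)
                      for p, q in zip(module.points, target)
                      for a, b in zip(p, q))
        if matched:
            return step
        module.rotate(step_deg)
    return None
```

For example, starting from the point (1, 0, 0) with a target rotated 90 degrees about z, the loop reaches the target after two 45-degree tool calls. The paper's finding is that the hard part is not this bookkeeping (which the imagery module outsources) but the model's ability to read depth and motion out of the rendered views.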