[2603.25629] LanteRn: Latent Visual Structured Reasoning
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.25629 (cs)
[Submitted on 26 Mar 2026]

Title: LanteRn: Latent Visual Structured Reasoning
Authors: André G. Viveiros, Nuno Gonçalves, Matthias Lindemann, André Martins

Abstract: While language reasoning models excel in many tasks, visual reasoning remains challenging for current large multimodal models (LMMs). As a result, most LMMs default to verbalizing perceptual content into text, a strong limitation for tasks requiring fine-grained spatial and visual understanding. While recent approaches take steps toward thinking with images by invoking tools or generating intermediate images, they either rely on external modules or incur unnecessary computation by reasoning directly in pixel space. In this paper, we introduce LanteRn, a framework that enables LMMs to interleave language with compact latent visual representations, allowing visual reasoning to occur directly in latent space. LanteRn augments a vision-language transformer with the ability to generate and attend to continuous visual thought embeddings during inference. We train the model in two stages: supervised fine-tuning to ground visual features in latent states, followed by reinforcement learning to align latent reasoning with task-level utility. We evaluate LanteRn on three perception-centric benchmar...
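The abstract's core mechanism, interleaving discrete text tokens with continuous visual thought embeddings that later steps attend to, can be illustrated with a toy sketch. This is not the paper's implementation: the dimensions, the fixed latent-emission schedule, and all weights here are hypothetical stand-ins meant only to show how a decoding loop might append latent vectors to the context instead of verbalizing them.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 16             # hidden size (toy)
VOCAB = 32         # toy vocabulary
LATENT_EVERY = 3   # hypothetical schedule: emit a latent thought every 3rd step

# Toy weights standing in for the LMM's output heads and embedding table.
W_token = rng.normal(size=(D, VOCAB)) * 0.1   # text head: hidden -> logits
W_latent = rng.normal(size=(D, D)) * 0.1      # latent head: hidden -> visual thought
E_token = rng.normal(size=(VOCAB, D)) * 0.1   # token embedding table

def attend(context):
    """Toy attention pooling: softmax-weighted mean of context vectors,
    querying from the last position."""
    scores = context @ context[-1]
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ context

def decode(prompt_embs, steps=9):
    """Interleave discrete text tokens with continuous latent visual thoughts.

    Both kinds of step append a D-dim vector to the context, so later text
    steps can attend to earlier visual thoughts directly in latent space,
    without ever rendering them as pixels or words.
    """
    context = list(prompt_embs)
    tokens, thoughts = [], []
    for t in range(steps):
        h = attend(np.stack(context))
        if (t + 1) % LATENT_EVERY == 0:
            z = np.tanh(h @ W_latent)   # continuous visual thought embedding
            thoughts.append(z)
            context.append(z)           # attended to, never verbalized
        else:
            logits = h @ W_token
            tok = int(np.argmax(logits))  # greedy text token
            tokens.append(tok)
            context.append(E_token[tok])
    return tokens, thoughts

prompt = rng.normal(size=(4, D))
tokens, thoughts = decode(prompt)
print(len(tokens), len(thoughts))  # 6 text tokens, 3 latent thoughts
```

The design point the sketch makes is that a latent step and a text step are symmetric from the attention mechanism's perspective: each contributes one vector to the growing context, which is what lets visual reasoning stay in latent space rather than being forced through the text vocabulary or pixel space.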