[2509.25848] More Thought, Less Accuracy? On the Dual Nature of Reasoning in Vision-Language Models
Computer Science > Computer Vision and Pattern Recognition

arXiv:2509.25848 (cs)

[Submitted on 30 Sep 2025 (v1), last revised 29 Mar 2026 (this version, v3)]

Title: More Thought, Less Accuracy? On the Dual Nature of Reasoning in Vision-Language Models

Authors: Xinyu Tian, Shu Zou, Zhaoyuan Yang, Mengqi He, Fabian Waschkowski, Lukas Wesemann, Peter Tu, Jing Zhang

Abstract: Reasoning has emerged as a pivotal capability in Large Language Models (LLMs). Through Reinforcement Learning (RL), typically Group Relative Policy Optimization (GRPO), these models are able to solve complex tasks such as mathematics and code generation. Building on these advances, recent research has sought to extend reasoning to Vision-Language Models (VLMs), yielding promising results across diverse visual tasks. Despite this progress, our study uncovers the dual nature of multimodal reasoning: while it substantially enhances logical inference and facilitates performance on challenging problems, it may gradually impair perceptual grounding, leading to recognition failures on otherwise basic visual questions. Through further analysis, we attribute this phenomenon to visual forgetting, wherein prolonged reasoning causes the model to increasingly disregard visual input. To address this, we propose Vision-Anchored Policy Optimization (VAPO), a simple...
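
For context on the GRPO training signal the abstract refers to, the following is a minimal sketch (not taken from this paper) of the group-relative advantage that GRPO typically uses: several responses are sampled per prompt, each is scored by a reward function, and each response's advantage is its reward normalized against the mean and standard deviation of its group. The function name and the 0/1 correctness reward in the example are illustrative assumptions.

```python
import numpy as np

def group_relative_advantages(rewards):
    """Compute GRPO-style advantages for one group of sampled responses.

    Each response's advantage is its reward minus the group mean,
    divided by the group standard deviation, so responses are judged
    relative to their siblings rather than by an absolute baseline.
    """
    rewards = np.asarray(rewards, dtype=float)
    std = rewards.std()
    if std < 1e-8:  # all rewards identical: no relative learning signal
        return np.zeros_like(rewards)
    return (rewards - rewards.mean()) / std

# Example: four sampled answers to one prompt, scored 1 if correct, 0 otherwise.
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))  # [ 1. -1. -1.  1.]
```

These normalized advantages then weight the policy-gradient update for each response's tokens; how VAPO modifies this objective to keep the policy anchored to the visual input is described in the full paper.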