[2604.09349] Visually-Guided Policy Optimization for Multimodal Reasoning
Computer Science > Computer Vision and Pattern Recognition
arXiv:2604.09349 (cs)
[Submitted on 10 Apr 2026]

Title: Visually-Guided Policy Optimization for Multimodal Reasoning
Authors: Zengbin Wang, Feng Xiong, Liang Lin, Xuecai Hu, Yong Wang, Yanlin Wang, Man Zhang, Xiangxiang Chu

Abstract: Reinforcement learning with verifiable rewards (RLVR) has significantly advanced the reasoning ability of vision-language models (VLMs). However, the inherently text-dominated nature of VLMs often leads to insufficient visual faithfulness, characterized by sparse attention activation on visual tokens. More importantly, our empirical analysis reveals that temporal visual forgetting along reasoning steps exacerbates this deficiency. To bridge this gap, we propose Visually-Guided Policy Optimization (VGPO), a novel framework to reinforce visual focus during policy optimization. Specifically, VGPO first introduces a Visual Attention Compensation mechanism that leverages visual similarity to localize and amplify visual cues, while progressively elevating visual expectations in later steps to counteract visual forgetting. Building on this mechanism, we implement a dual-grained advantage re-weighting strategy: the intra-trajectory level highlights tokens exhibiting relatively high visual activation, while the inter-trajectory level priori...
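The intra-trajectory re-weighting described in the abstract can be illustrated with a minimal sketch. The abstract only states that tokens with relatively high visual activation are up-weighted; the function below therefore assumes a simple multiplicative scheme in which each token's advantage is scaled by its visual attention relative to the trajectory mean, with a hypothetical strength parameter `beta`. This is not the paper's actual formulation, only an interpretation of the one-sentence description.

```python
import numpy as np

def intra_trajectory_reweight(advantages, visual_attn, beta=1.0):
    """Hypothetical sketch of intra-trajectory advantage re-weighting.

    Tokens whose visual attention activation is above the trajectory
    mean get their advantage up-weighted, those below it down-weighted.
    The exact weighting in VGPO is not specified in the abstract, so a
    simple multiplicative scheme relative to the mean is assumed here.
    """
    adv = np.asarray(advantages, dtype=float)
    attn = np.asarray(visual_attn, dtype=float)
    # Visual activation of each token relative to the trajectory mean.
    rel = attn / (attn.mean() + 1e-8)
    # Interpolate between uniform weighting (beta=0) and fully
    # attention-proportional weighting (beta=1).
    weights = 1.0 + beta * (rel - 1.0)
    return adv * weights

# Example: equal advantages, increasing visual attention along the
# trajectory; later (more visually attentive) tokens are up-weighted.
out = intra_trajectory_reweight([1.0, 1.0, 1.0], [0.1, 0.2, 0.3])
```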