[2603.00152] Dr. Seg: Revisiting GRPO Training for Visual Large Language Models through Perception-Oriented Design
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.00152 (cs)
[Submitted on 25 Feb 2026]

Title: Dr. Seg: Revisiting GRPO Training for Visual Large Language Models through Perception-Oriented Design
Authors: Haoxiang Sun, Tao Wang, Chenwei Tang, Li Yuan, Jiancheng Lv

Abstract: Following the success of Group Relative Policy Optimization (GRPO) in foundation LLMs, an increasing number of works have sought to adapt GRPO to Visual Large Language Models (VLLMs) for visual perception tasks (e.g., detection and segmentation). However, much of this line of research rests on a long-standing yet unexamined assumption: training paradigms developed for language reasoning can be transferred seamlessly to visual perception. Our experiments show that this assumption is not valid, revealing intrinsic differences between reasoning-oriented and perception-oriented settings. Using reasoning segmentation as a representative case, we surface two overlooked factors: (i) the need for a broader output space, and (ii) the importance of fine-grained, stable rewards. Building on these observations, we propose Dr. Seg, a simple, plug-and-play GRPO-based framework consisting of a Look-to-Confirm mechanism and a Distribution-Ranked Reward module, requiring no architectural modifications and i...
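For context on the training method the abstract builds on (and not on the paper's own Look-to-Confirm or Distribution-Ranked Reward modules, whose details are not given here), GRPO's core step normalizes each sampled response's reward against the mean and standard deviation of its own group, yielding group-relative advantages without a learned value function. A minimal sketch, with an illustrative reward list standing in for a hypothetical segmentation-quality score:

```python
import statistics

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages as in GRPO: normalize each
    sampled response's reward by the mean and (population)
    std of the rewards within its own group."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Example: four responses sampled for one prompt, scored by a
# hypothetical fine-grained reward (e.g., a mask-IoU score).
advs = grpo_advantages([0.9, 0.4, 0.4, 0.1])
# Advantages sum to ~0; the best response gets a positive
# advantage, the worst a negative one.
```

A coarse, unstable reward (e.g., a binary hit/miss) collapses much of the group to identical rewards, shrinking the std and making these advantages noisy, which is one way to read the abstract's call for fine-grained, stable rewards.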