[2603.04002] Discriminative Perception via Anchored Description for Reasoning Segmentation
Computer Science > Computer Vision and Pattern Recognition arXiv:2603.04002 (cs) [Submitted on 4 Mar 2026] Title: Discriminative Perception via Anchored Description for Reasoning Segmentation Authors: Tao Yang, Qing Zhou, Yanliang Li, Qi Wang Abstract: Reasoning segmentation increasingly employs reinforcement learning to generate explanatory reasoning chains that guide Multimodal Large Language Models. However, the geometric rewards used in these pipelines are confined to guiding the final localization; they cannot discriminate whether the reasoning process remains anchored on the referred region or strays into irrelevant context. Lacking this discriminative guidance, the model's reasoning often devolves into unfocused, verbose chains that ultimately fail to disambiguate and perceive the target in complex scenes. This suggests complementing the RL objective with Discriminative Perception, the ability to actively distinguish a target from its context. To realize this, we propose DPAD, which compels the model to generate a descriptive caption of the referred object and then discriminates explicitly by contrasting the caption's semantic relevance to the referred region against the wider context. By optimizing for this discriminative capability, the model is forced to focus on the unique attributes...
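The abstract describes contrasting a caption's semantic relevance to the referred region against the surrounding context. The paper's exact reward is not given here, so the following is only a minimal sketch of one plausible instantiation: an InfoNCE-style score in which the caption embedding should match the target-region embedding more strongly than any distractor context embedding. All function names, embeddings, and the temperature value are illustrative assumptions, not DPAD's actual formulation.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def discriminative_reward(caption_emb, target_emb, context_embs, tau=0.07):
    """Hypothetical discriminative score (not the paper's exact reward):
    softmax probability that the caption matches the referred region
    rather than any of the distractor context regions."""
    sims = [cosine(caption_emb, target_emb)]
    sims += [cosine(caption_emb, c) for c in context_embs]
    logits = np.array(sims) / tau
    logits -= logits.max()               # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(probs[0])               # mass assigned to the target

# Toy embeddings: a caption aligned with the target scores near 1,
# one aligned with the context scores near 0.
target = np.array([1.0, 0.0])
context = [np.array([0.0, 1.0])]
print(discriminative_reward(np.array([1.0, 0.1]), target, context))
print(discriminative_reward(np.array([0.1, 1.0]), target, context))
```

A reward of this shape would penalize captions that describe the scene generically, since generic descriptions are similar to both target and context and therefore earn low contrastive probability.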