[2605.07141] Qwen3-VL-Seg: Unlocking Open-World Referring Segmentation with Vision-Language Grounding
Computer Science > Computer Vision and Pattern Recognition
arXiv:2605.07141 (cs)
[Submitted on 8 May 2026]

Title: Qwen3-VL-Seg: Unlocking Open-World Referring Segmentation with Vision-Language Grounding
Authors: Yuan Yao, Qiushi Yang, Humen Zhong, Jiangning Wei, Yifang Men, Shuai Bai, Miaomiao Cui, Zhibo Yang

Abstract: Open-world referring segmentation requires grounding unconstrained language expressions to precise pixel-level regions. Existing multimodal large language models (MLLMs) exhibit strong open-world visual grounding, but their outputs remain limited to sparse bounding-box coordinates and are insufficient for dense visual prediction. Recent MLLM-based segmentation methods either directly predict sparse contour coordinates, struggling to reconstruct continuous object boundaries, or rely on external segmentation foundation models such as the Segment Anything Model (SAM), introducing substantial architectural and deployment overhead. We present Qwen3-VL-Seg, a parameter-efficient framework that treats the MLLM-predicted box as a semantically grounded structural prior and decodes it into pixel-level referring segmentation. At its core, a lightweight box-guided mask decoder combines multi-scale spatial feature injection, spatial-semantic query construction, box-guided high-resolution pixel f...
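
To make the box-guided decoding idea concrete, the following is a minimal, hypothetical sketch, not the authors' implementation: an MLLM-predicted box is embedded as a structural prior, combined with a semantic token into a single query, and used to attend over fused multi-scale image features to produce mask logits. All names (BoxGuidedMaskDecoder, d_model, the assumed 4096-dim MLLM hidden size) and the fusion/attention choices are assumptions made only for illustration, since the abstract is truncated and gives no architectural details beyond the component names.

```python
# Hypothetical sketch of a box-guided mask decoder; illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BoxGuidedMaskDecoder(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 8, mllm_dim: int = 4096):
        super().__init__()
        # Embed the normalized box (x1, y1, x2, y2) into the decoder width.
        self.box_embed = nn.Linear(4, d_model)
        # Project a semantic token from the (assumed) MLLM hidden size.
        self.sem_proj = nn.Linear(mllm_dim, d_model)
        # Cross-attention: the spatial-semantic query attends to image features.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Projection used to dot-product the query against per-pixel features.
        self.mask_head = nn.Linear(d_model, d_model)

    def forward(self, feats, box, sem_token):
        # feats:     list of multi-scale features, each (B, d_model, H_i, W_i)
        # box:       (B, 4) normalized xyxy coordinates predicted by the MLLM
        # sem_token: (B, mllm_dim) hidden state of the grounding token
        B = box.shape[0]
        # "Multi-scale spatial feature injection" (assumed form): resize every
        # scale to the finest resolution and sum into one feature map.
        h, w = feats[0].shape[-2:]
        fused = sum(
            F.interpolate(f, size=(h, w), mode="bilinear", align_corners=False)
            for f in feats
        )                                                  # (B, d_model, H, W)
        pixels = fused.flatten(2).transpose(1, 2)          # (B, H*W, d_model)
        # "Spatial-semantic query construction" (assumed form): add the box
        # prior embedding to the projected semantic token.
        query = (self.box_embed(box) + self.sem_proj(sem_token)).unsqueeze(1)
        query, _ = self.cross_attn(query, pixels, pixels)  # (B, 1, d_model)
        # Per-pixel mask logits from query-pixel similarity.
        logits = torch.einsum("bqd,bpd->bqp", self.mask_head(query), pixels)
        return logits.view(B, 1, h, w)


if __name__ == "__main__":
    dec = BoxGuidedMaskDecoder()
    feats = [torch.randn(2, 256, 64, 64), torch.randn(2, 256, 32, 32)]
    box = torch.rand(2, 4)
    sem = torch.randn(2, 4096)
    print(dec(feats, box, sem).shape)  # torch.Size([2, 1, 64, 64])
```

The sketch only illustrates the general pattern of treating a predicted box as a prior for a lightweight mask decoder; the paper's actual query construction, feature injection, and high-resolution refinement may differ.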