[2506.13925] Segmenting Visuals With Querying Words: Language Anchors For Semi-Supervised Image Segmentation
Computer Science > Computer Vision and Pattern Recognition
arXiv:2506.13925 (cs)
[Submitted on 16 Jun 2025 (v1), last revised 22 Mar 2026 (this version, v4)]

Title: Segmenting Visuals With Querying Words: Language Anchors For Semi-Supervised Image Segmentation
Authors: Numair Nadeem, Saeed Anwar, Muhammad Hamza Asad, Abdul Bais

Abstract: Vision-Language Models (VLMs) provide rich semantic priors but remain underexplored in semi-supervised semantic segmentation. Recent attempts to integrate VLMs to inject high-level semantics overlook the semantic misalignment between visual and textual representations that arises from using domain-invariant text embeddings without adapting them to dataset- and image-specific contexts. This lack of domain awareness, coupled with limited annotations, weakens the model's semantic understanding by preventing effective vision-language alignment. As a result, the model struggles with contextual reasoning, shows weak intra-class discrimination, and confuses similar classes. To address these challenges, we propose the Hierarchical Vision-Language transFormer (HVLFormer), which achieves domain-aware and domain-robust alignment between visual and textual representations within a mask-transformer architecture. First, we transform text embeddings from pretrained VLMs into...
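To make the vision-language alignment idea concrete, here is a minimal, hypothetical sketch (not the paper's HVLFormer) of the common baseline the abstract critiques: frozen, domain-invariant text embeddings used as fixed class anchors, with per-pixel class scores obtained by cosine similarity between pixel features and each class's text anchor. All shapes, names, and the random stand-in features are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

C, D, H, W = 4, 8, 16, 16               # classes, embed dim, feature map size (assumed)
text_emb = rng.normal(size=(C, D))      # stand-in for frozen VLM text embeddings (one per class)
pix_feat = rng.normal(size=(H, W, D))   # stand-in for dense visual features from an encoder

def l2norm(x, axis=-1):
    # Normalize so the dot product below is a cosine similarity
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Cosine similarity between every pixel feature and every class text anchor
logits = l2norm(pix_feat) @ l2norm(text_emb).T   # shape (H, W, C)

# Per-pixel prediction: the class whose text anchor is most similar
seg_map = logits.argmax(axis=-1)                 # shape (H, W)

print(seg_map.shape)
```

Because `text_emb` here is fixed and shared across all images, this sketch exhibits exactly the limitation the abstract targets: the anchors carry no dataset- or image-specific context, which is the misalignment HVLFormer's domain-aware adaptation is designed to correct.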