[2603.28555] Domain-Invariant Prompt Learning for Vision-Language Models
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.28555 (cs)
[Submitted on 30 Mar 2026]

Title: Domain-Invariant Prompt Learning for Vision-Language Models
Authors: Arsham Gholamzadeh Khoee, Yinan Yu, Robert Feldt

Abstract: Large pre-trained vision-language models such as CLIP have transformed computer vision by aligning images and text in a shared feature space, enabling robust zero-shot transfer via prompting. Soft-prompting methods such as Context Optimization (CoOp) effectively adapt these models to downstream recognition tasks by learning a set of context vectors. However, CoOp lacks an explicit mechanism for handling domain shift across unseen distributions. To address this, we propose Domain-invariant Context Optimization (DiCoOp), an extension of CoOp designed for domain generalization. By employing adversarial training, DiCoOp forces the model to learn domain-invariant prompts while preserving their discriminative power for classification. Experimental results show that DiCoOp consistently surpasses CoOp on domain generalization tasks across diverse visual domains.

Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.28555 [cs.CV] (or arXiv:2603.28555v1 [cs.CV] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.28555
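
To make the idea concrete, below is a minimal, self-contained PyTorch sketch (not the authors' implementation) of adversarial domain-invariant prompt learning. It assumes CoOp-style learnable context vectors on top of frozen image and text encoders, and wires the adversarial objective through a gradient-reversal domain discriminator that reads the image-to-prompt similarity vector so that gradients reach the prompts. The toy encoders, module names, and hyperparameters are illustrative assumptions, not details from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; negates (and scales) gradients in backward,
    # so the domain head learns to predict the domain while the prompts are
    # pushed to make domains indistinguishable (DANN-style objective).
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Toy stand-ins for frozen CLIP encoders (assumptions made for this sketch).
feat_dim, num_classes, num_domains, num_ctx = 64, 10, 4, 8
image_encoder = nn.Linear(3 * 32 * 32, feat_dim)   # frozen in practice
text_encoder = nn.Linear(feat_dim, feat_dim)        # frozen in practice
for p in list(image_encoder.parameters()) + list(text_encoder.parameters()):
    p.requires_grad_(False)

class_name_emb = torch.randn(num_classes, feat_dim)  # frozen class-name embeddings

# Learnable soft prompt: shared context vectors (CoOp-style).
ctx_vectors = nn.Parameter(0.02 * torch.randn(num_ctx, feat_dim))

# Hypothetical domain discriminator reading the image-to-prompt similarities.
domain_head = nn.Sequential(nn.Linear(num_classes, 64), nn.ReLU(),
                            nn.Linear(64, num_domains))

optimizer = torch.optim.SGD([ctx_vectors] + list(domain_head.parameters()), lr=2e-3)

def class_text_features():
    # Crude stand-in for prepending context tokens to each class name:
    # combine the shared context with the class-name embedding, then encode.
    prompt = ctx_vectors.mean(dim=0, keepdim=True) + class_name_emb  # [C, D]
    return F.normalize(text_encoder(prompt), dim=-1)

def training_step(images, labels, domains, lambd=0.5):
    img_feat = F.normalize(image_encoder(images.flatten(1)), dim=-1)  # [B, D]
    txt_feat = class_text_features()                                  # [C, D]
    sim = img_feat @ txt_feat.t()                                     # [B, C] cosine similarities

    cls_loss = F.cross_entropy(100.0 * sim, labels)        # CLIP-style scaled logits
    # Adversarial branch: reversed gradients flow through sim into the prompts,
    # since the image encoder is frozen.
    dom_logits = domain_head(grad_reverse(sim, lambd))
    dom_loss = F.cross_entropy(dom_logits, domains)

    loss = cls_loss + dom_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return cls_loss.item(), dom_loss.item()

# One toy update on random data, just to show the shapes and the loss wiring.
images = torch.randn(16, 3, 32, 32)
labels = torch.randint(0, num_classes, (16,))
domains = torch.randint(0, num_domains, (16,))
print(training_step(images, labels, domains))

In this wiring, the domain head is trained to recognize the source domain from the image-to-prompt similarity pattern, while the reversed gradients push the learned context so that this pattern becomes domain-agnostic; the class cross-entropy term preserves discriminative power, mirroring the trade-off described in the abstract.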