[2604.09532] Seeing is Believing: Robust Vision-Guided Cross-Modal Prompt Learning under Label Noise
Computer Science > Computer Vision and Pattern Recognition

arXiv:2604.09532 (cs) [Submitted on 10 Apr 2026]

Title: Seeing is Believing: Robust Vision-Guided Cross-Modal Prompt Learning under Label Noise

Authors: Zibin Geng, Xuefeng Jiang, Jia Li, Zheng Li, Tian Wen, Lvhua Wu, Sheng Sun, Yuwei Wang, Min Liu

Abstract: Prompt learning is a parameter-efficient way to adapt vision-language models, yet its robustness under label noise remains underexplored. Visual content carries richer and more reliable semantic information and therefore stays comparatively robust under label noise, whereas the learned prompt itself is highly susceptible to it. Motivated by this observation, we propose VisPrompt, a lightweight and robust vision-guided prompt learning framework for noisy-label settings. Specifically, we exploit a cross-modal attention mechanism to reversely inject visual semantics into prompt representations. This enables the prompt tokens to selectively aggregate visual information relevant to the current sample, improving robustness by anchoring prompt learning to stable instance-level visual evidence and reducing the influence of noisy supervision. To address the instability caused by using the same way of injecting visual information for all samples, despite differences in the quality of their visual cue...
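The cross-modal injection described in the abstract can be illustrated with a minimal sketch. This is an assumed reconstruction, not the paper's released implementation: it treats the prompt tokens as attention queries and the image patch features as keys/values, then adds the aggregated visual context back into the prompts as a residual. The function name `vision_guided_prompt_update` and all shapes are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def vision_guided_prompt_update(prompts, patches):
    """Inject visual semantics into prompt tokens via cross-modal attention.

    prompts: (P, d) learnable prompt tokens, used as queries
    patches: (N, d) visual patch features from the image encoder,
             used as keys and values
    Returns prompts enriched with sample-specific visual evidence.
    """
    d = prompts.shape[-1]
    scores = prompts @ patches.T / np.sqrt(d)   # (P, N) query-key similarities
    attn = softmax(scores, axis=-1)             # attention weights over patches
    visual_ctx = attn @ patches                 # (P, d) aggregated visual info
    return prompts + visual_ctx                 # residual injection into prompts

rng = np.random.default_rng(0)
prompts = rng.standard_normal((4, 16))    # e.g. 4 prompt tokens of width 16
patches = rng.standard_normal((49, 16))   # e.g. a 7x7 patch grid
out = vision_guided_prompt_update(prompts, patches)
print(out.shape)  # (4, 16)
```

Because the attention weights are computed per sample, each image contributes its own visual context to the prompts, which is the instance-level anchoring the abstract refers to.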