[2603.02557] CAPT: Confusion-Aware Prompt Tuning for Reducing Vision-Language Misalignment
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.02557 (cs)
[Submitted on 3 Mar 2026]

Title: CAPT: Confusion-Aware Prompt Tuning for Reducing Vision-Language Misalignment
Authors: Maoyuan Shao, Yutong Gao, Xinyang Huang, Chuang Zhu, Lijuan Sun, Guoshun Nan

Abstract: Vision-language models like CLIP have achieved remarkable progress in cross-modal representation learning, yet suffer from systematic misclassifications among visually and semantically similar categories. We observe that such confusion patterns are not random but persistently occur between specific category pairs, revealing the model's intrinsic bias and limited fine-grained discriminative ability. To address this, we propose CAPT, a Confusion-Aware Prompt Tuning framework that enables models to learn from their own misalignment. Specifically, we construct a Confusion Bank to explicitly model stable confusion relationships across categories and misclassified samples. On this basis, we introduce a Semantic Confusion Miner (SEM) to capture global inter-class confusion through semantic difference and commonality prompts, and a Sample Confusion Miner (SAM) to retrieve representative misclassified instances from the bank and capture sample-level cues through a Diff-Manner Adapter that integrates global and local contexts. To f...
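The abstract does not specify how the Confusion Bank is implemented, but the idea of recording stable (true, predicted) confusion pairs together with representative misclassified samples can be sketched as follows. All names (`ConfusionBank`, `stable_pairs`, the threshold parameters) are illustrative assumptions, not the paper's actual API.

```python
from collections import defaultdict


class ConfusionBank:
    """Minimal sketch of a confusion bank, assuming it (a) counts how often
    each (true, predicted) class pair is confused and (b) keeps a few
    representative misclassified sample ids per pair. The paper's actual
    data structure may differ."""

    def __init__(self, max_samples_per_pair=8):
        self.pair_counts = defaultdict(int)      # (true, pred) -> count
        self.pair_samples = defaultdict(list)    # (true, pred) -> sample ids
        self.max_samples = max_samples_per_pair

    def update(self, sample_id, true_label, pred_label):
        """Record one prediction; only misclassifications enter the bank."""
        if true_label == pred_label:
            return
        pair = (true_label, pred_label)
        self.pair_counts[pair] += 1
        if len(self.pair_samples[pair]) < self.max_samples:
            self.pair_samples[pair].append(sample_id)

    def stable_pairs(self, min_count=2):
        """Return confusion pairs recurring at least `min_count` times,
        most frequent first -- a proxy for 'stable' confusion relations."""
        return sorted(
            (p for p, c in self.pair_counts.items() if c >= min_count),
            key=lambda p: -self.pair_counts[p],
        )


bank = ConfusionBank()
predictions = [
    (0, "cat", "tiger"),
    (1, "cat", "cat"),
    (2, "cat", "tiger"),
    (3, "wolf", "dog"),
    (4, "cat", "tiger"),
]
for sid, y_true, y_pred in predictions:
    bank.update(sid, y_true, y_pred)

print(bank.stable_pairs())               # [('cat', 'tiger')]
print(bank.pair_samples[("cat", "tiger")])  # [0, 2, 4]
```

Under this reading, SAM would query `pair_samples` for representative misclassified instances, while SEM would operate on the class pairs returned by `stable_pairs`.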