[2602.23653] ProtoDCS: Towards Robust and Efficient Open-Set Test-Time Adaptation for Vision-Language Models
Computer Science > Computer Vision and Pattern Recognition

arXiv:2602.23653 (cs) [Submitted on 27 Feb 2026]

Title: ProtoDCS: Towards Robust and Efficient Open-Set Test-Time Adaptation for Vision-Language Models

Authors: Wei Luo, Yangfan Ou, Jin Deng, Zeshuai Deng, Xiquan Yan, Zhiquan Wen, Mingkui Tan

Abstract: Large-scale Vision-Language Models (VLMs) exhibit strong zero-shot recognition, yet their real-world deployment is challenged by distribution shifts. While Test-Time Adaptation (TTA) can mitigate this, existing VLM-based TTA methods operate under a closed-set assumption, failing in open-set scenarios where test streams contain both covariate-shifted in-distribution (csID) and out-of-distribution (csOOD) data. This leads to a critical difficulty: the model must discriminate unknown csOOD samples to avoid interference while simultaneously adapting to known csID classes for accuracy. Current open-set TTA (OSTTA) methods rely on hard thresholds for separation and entropy minimization for adaptation. These strategies are brittle, often misclassifying ambiguous csOOD samples and inducing overconfident predictions, and their parameter-update mechanism is computationally prohibitive for VLMs. To address these limitations, we propose Prototype-based Double-Check Separation (ProtoDCS), a robust framework for OSTTA ...
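To make the baseline the abstract criticizes concrete, the following is a minimal sketch (not the paper's ProtoDCS method) of hard-threshold ID/OOD separation by prediction entropy. The function names and the threshold `tau` are illustrative assumptions; in practice the logits would come from a VLM's image-text similarity scores.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over class logits."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def entropy(probs, axis=-1, eps=1e-12):
    """Shannon entropy (nats) of each probability vector."""
    return -(probs * np.log(probs + eps)).sum(axis=axis)

def threshold_separation(logits, tau):
    """Hard-threshold baseline: flag a sample as csOOD (illustrative
    name) when its prediction entropy exceeds tau; only the remaining
    samples would then be used for entropy-minimization adaptation."""
    h = entropy(softmax(logits))
    return h > tau  # True -> treated as csOOD, excluded from adaptation

# A confident prediction stays in-distribution; a near-uniform one
# (entropy close to ln(3) ~ 1.10 for 3 classes) is flagged as OOD.
logits = np.array([[10.0, 0.0, 0.0],
                   [0.1, 0.0, 0.05]])
is_ood = threshold_separation(logits, tau=0.5)  # -> [False, True]
```

The brittleness the abstract points out is visible here: a single scalar `tau` must separate ambiguous csOOD samples from hard csID samples, and minimizing entropy on whatever passes the threshold pushes the model toward overconfident predictions.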