[2603.23159] Conformal Cross-Modal Active Learning
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.23159 (cs)
[Submitted on 24 Mar 2026]

Title: Conformal Cross-Modal Active Learning
Authors: Huy Hoang Nguyen, Cédric Jung, Shirin Salehi, Tobias Glück, Anke Schmeink, Andreas Kugi

Abstract: Foundation models for vision have transformed visual recognition with powerful pretrained representations and strong zero-shot capabilities, yet their potential for data-efficient learning remains largely untapped. Active Learning (AL) aims to minimize annotation costs by strategically selecting the most informative samples for labeling, but existing methods largely overlook the rich multimodal knowledge embedded in modern vision-language models (VLMs). We introduce Conformal Cross-Modal Acquisition (CCMA), a novel AL framework that bridges vision and language modalities through a teacher-student architecture. CCMA employs a pretrained VLM as a teacher to provide semantically grounded uncertainty estimates, conformally calibrated to guide sample selection for a vision-only student model. By integrating multimodal conformal scoring with diversity-aware selection strategies, CCMA achieves superior data efficiency across multiple benchmarks. Our approach consistently outperforms state-of-the-art AL baselines, demonstrating clear advantages over methods relying solely on uncertainty or di...
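The abstract does not specify the exact conformal procedure, but the "conformally calibrated" uncertainty it describes can be sketched with standard split conformal prediction: calibrate a nonconformity threshold on held-out labeled data, then acquire the pool samples whose conformal prediction sets are largest (most ambiguous). All function names below (`calibrate_threshold`, `select_for_labeling`) are illustrative assumptions, not the paper's API.

```python
import numpy as np

def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split conformal calibration (a sketch, not the paper's exact method).

    Nonconformity score = 1 - probability assigned to the true class.
    Returns the finite-sample-corrected (1 - alpha) quantile of the scores.
    """
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level, method="higher")

def prediction_set_sizes(probs, qhat):
    """Size of each conformal prediction set: classes with 1 - prob <= qhat."""
    return (1.0 - probs <= qhat).sum(axis=1)

def select_for_labeling(pool_probs, qhat, budget):
    """Acquire the pool samples with the largest prediction sets,
    i.e. those the calibrated model is least certain about."""
    sizes = prediction_set_sizes(pool_probs, qhat)
    return np.argsort(-sizes)[:budget]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical teacher softmax outputs for a 10-class problem.
    logits = rng.normal(size=(200, 10))
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    labels = rng.integers(0, 10, size=200)

    qhat = calibrate_threshold(probs[:100], labels[:100], alpha=0.1)
    picked = select_for_labeling(probs[100:], qhat, budget=16)
    print(f"qhat={qhat:.3f}, acquired {len(picked)} samples")
```

In CCMA the scoring model would be the VLM teacher and the acquired labels would train the vision-only student; the sketch above only shows the calibration-then-select mechanic, not the cross-modal or diversity-aware components.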