[2602.23495] Uncertainty-aware Language Guidance for Concept Bottleneck Models
Computer Science > Machine Learning
arXiv:2602.23495 (cs) [Submitted on 26 Feb 2026]
Title: Uncertainty-aware Language Guidance for Concept Bottleneck Models
Authors: Yangyi Li, Mengdi Huai

Abstract: Concept Bottleneck Models (CBMs) provide inherent interpretability by first mapping input samples to high-level semantic concepts and then combining these concepts for the final classification. However, annotating human-understandable concepts requires extensive expert knowledge and labor, constraining the broad adoption of CBMs. A few recent works instead leverage the knowledge of large language models (LLMs) to construct concept bottlenecks. Nevertheless, they face two essential limitations. First, they overlook the uncertainty associated with the concepts annotated by LLMs and lack a valid mechanism to quantify it, increasing the risk of errors due to LLM hallucinations. Second, they fail to incorporate the uncertainty of these annotations into the learning process of concept bottleneck models. To address these limitations, we propose a novel uncertainty-aware CBM method, which not only rigorously quantifies the uncertainty of LLM-annotated concept labels with valid and distribution-free guarantees, but also incorporates...
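The abstract describes the standard two-stage CBM structure: a concept predictor maps the input to concept activations, and a label predictor classifies using only those concepts. The paper's actual architecture is not given here; the following is a minimal illustrative sketch with made-up dimensions and linear maps, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, n_concepts, n_classes = 8, 4, 3

# Stage 1: concept predictor g(x) -> concept scores (here: a linear map).
W_concept = rng.normal(size=(n_features, n_concepts))
# Stage 2: label predictor f(c) -> class scores computed from concepts only,
# which is what makes the concept vector an interpretable bottleneck.
W_label = rng.normal(size=(n_concepts, n_classes))

def predict(x):
    """Map input -> concepts -> class; the concept vector is the bottleneck."""
    concepts = 1.0 / (1.0 + np.exp(-x @ W_concept))  # sigmoid concept activations
    logits = concepts @ W_label
    return concepts, int(np.argmax(logits))

x = rng.normal(size=n_features)
concepts, label = predict(x)
print(concepts.shape, label)
```

Because the final decision depends on the input only through the concept activations, a user can inspect (or intervene on) those activations to explain or correct a prediction.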
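"Valid and distribution-free guarantees" for LLM-annotated labels are commonly obtained via split conformal prediction: calibrate a threshold on held-out nonconformity scores so that a prediction set covers the true label with probability at least 1 - alpha. The abstract does not specify the paper's procedure; this is a generic, hypothetical sketch with synthetic scores standing in for "1 - LLM confidence in the true concept annotation".

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.1   # target miscoverage level
n_cal = 500   # size of the calibration split

# Nonconformity scores on the calibration split (lower = more conforming);
# synthetic here, purely for illustration.
cal_scores = rng.uniform(size=n_cal)

# Conformal quantile with the standard finite-sample correction:
# the ceil((n+1)(1-alpha))-th smallest calibration score.
k = int(np.ceil((n_cal + 1) * (1 - alpha)))
qhat = np.sort(cal_scores)[min(k, n_cal) - 1]

def concept_prediction_set(candidate_scores):
    """Keep every candidate concept label whose score is <= qhat."""
    return [i for i, s in enumerate(candidate_scores) if s <= qhat]

# Example: nonconformity scores for four candidate annotations of one concept.
print(qhat, concept_prediction_set([0.05, 0.5, 0.95, 2.0]))
```

Under exchangeability of calibration and test scores, the returned set contains the true concept label with probability at least 1 - alpha, with no assumptions on the score distribution; ambiguous annotations naturally yield larger sets, exposing the LLM's uncertainty instead of hiding it.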