[2602.23495] Uncertainty-aware Language Guidance for Concept Bottleneck Models

Computer Science > Machine Learning — arXiv:2602.23495 (cs)
Submitted on 26 Feb 2026

Title: Uncertainty-aware Language Guidance for Concept Bottleneck Models
Authors: Yangyi Li, Mengdi Huai

Abstract: Concept Bottleneck Models (CBMs) provide inherent interpretability by first mapping input samples to high-level semantic concepts and then combining these concepts for the final classification. However, annotating human-understandable concepts requires extensive expert knowledge and labor, constraining the broad adoption of CBMs. A few works instead leverage the knowledge of large language models (LLMs) to construct concept bottlenecks. Nevertheless, these face two essential limitations. First, they overlook the uncertainty associated with LLM-annotated concepts and lack a valid mechanism to quantify it, increasing the risk of errors due to LLM hallucinations. Second, they fail to incorporate the uncertainty of these annotations into the learning process of the concept bottleneck model. To address these limitations, we propose a novel uncertainty-aware CBM method, which not only rigorously quantifies the uncertainty of LLM-annotated concept labels with valid and distribution-free guarantees, but also incorporates...
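The bottleneck structure the abstract describes — inputs mapped to concept activations, then a classifier that sees only those concepts — can be sketched as follows. This is a minimal illustration, not the paper's model; all dimensions and weight matrices here are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 8 input features, 4 concepts, 3 classes.
D_IN, N_CONCEPTS, N_CLASSES = 8, 4, 3

# Concept predictor g: maps inputs to concept activations in [0, 1].
W_g = rng.normal(size=(D_IN, N_CONCEPTS))

# Label predictor f: a linear map from concepts to class scores, so each
# class decision is directly attributable to the learned concepts.
W_f = rng.normal(size=(N_CONCEPTS, N_CLASSES))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    concepts = sigmoid(x @ W_g)  # bottleneck: all information flows through concepts
    scores = concepts @ W_f      # final classification uses concepts alone
    return concepts, scores.argmax(axis=-1)

x = rng.normal(size=(2, D_IN))
concepts, labels = predict(x)
print(concepts.shape, labels.shape)  # (2, 4) (2,)
```

Because the label predictor only ever sees the concept vector, an interpretation of a prediction reduces to inspecting which concepts were active and how `W_f` weights them.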
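The abstract does not spell out how the "valid and distribution-free guarantees" on LLM-annotated concept labels are obtained. One standard technique with exactly that property is split conformal prediction; the sketch below calibrates a threshold on LLM-style confidence scores for a single binary concept. All data here is synthetic and the setup is an assumption for illustration, not the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.1  # target miscoverage: sets should contain the true label >= 90% of the time

# Hypothetical calibration data: LLM-style probabilities that a binary concept
# is "present", plus ground-truth labels from a small expert-checked set.
n_cal = 200
p_present = rng.uniform(size=n_cal)  # stand-in for LLM confidence
y_cal = (p_present + 0.3 * rng.normal(size=n_cal) > 0.5).astype(int)

# Nonconformity score: 1 minus the probability assigned to the true label.
probs = np.stack([1 - p_present, p_present], axis=1)
scores = 1 - probs[np.arange(n_cal), y_cal]

# Conformal quantile with the finite-sample correction.
k = int(np.ceil((n_cal + 1) * (1 - alpha)))
qhat = np.sort(scores)[min(k, n_cal) - 1]

def prediction_set(p_new):
    """Return every concept label whose nonconformity score clears the threshold."""
    p = np.array([1 - p_new, p_new])
    return [label for label in (0, 1) if 1 - p[label] <= qhat]
```

A confident annotation yields a singleton set, while an uncertain one yields both labels (or, rarely, none), flagging exactly the annotations where LLM hallucination risk is highest; the coverage guarantee is marginal and holds under exchangeability of calibration and test points.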

Originally published on March 02, 2026. Curated by AI News.
