[2507.08334] CoBELa: Steering Transparent Generation via Concept Bottlenecks on Energy Landscapes


arXiv - AI 4 min read

About this article

Abstract page for arXiv paper 2507.08334: CoBELa: Steering Transparent Generation via Concept Bottlenecks on Energy Landscapes

Computer Science > Computer Vision and Pattern Recognition
arXiv:2507.08334 (cs)
[Submitted on 11 Jul 2025 (v1), last revised 3 Mar 2026 (this version, v3)]

Title: CoBELa: Steering Transparent Generation via Concept Bottlenecks on Energy Landscapes
Authors: Sangwon Kim, Kyoungoh Lee, Jeyoun Dong, Kwang-Ju Kim

Abstract: Generative concept bottleneck models aim to enable interpretable generation by routing synthesis through explicit, user-facing concepts. In practice, prior approaches often rely on non-explicit bottleneck representations (e.g., vision cues or opaque concept embeddings) or on black-box decoders to preserve image quality, which weakens transparency. We propose CoBELa (Concept Bottlenecks on Energy Landscapes), a decoder-free, energy-based framework that eliminates non-explicit bottleneck representations by conditioning generation entirely through per-concept energy functions over the latent space of a frozen pretrained generator, requiring no generator retraining and enabling post-hoc interpretation. Because these concept energies compose additively, CoBELa naturally supports compositional concept interventions: concept conjunction and negation are realized by summing or subtracting per-concept energy terms without additional training. A diffusion-schedule...
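The compositional property described in the abstract, where concept conjunction and negation reduce to summing or subtracting per-concept energy terms, can be sketched in a few lines. The following is a toy illustration only, not the paper's implementation: the quadratic energies, the concept anchors, and the single gradient-descent guidance step are all hypothetical stand-ins for the per-concept energy functions CoBELa would learn over a frozen generator's latent space.

```python
import numpy as np

def energy(z, anchor):
    """Toy per-concept energy: squared distance of latent z to a
    hypothetical concept anchor (stand-in for a learned E_k(z))."""
    return 0.5 * np.sum((z - anchor) ** 2)

def energy_grad(z, anchor):
    """Gradient of the toy quadratic energy with respect to z."""
    return z - anchor

def composed_grad(z, concepts):
    """Compositional intervention: signed sum of per-concept gradients.

    concepts: list of (anchor, sign) pairs, sign=+1.0 for conjunction
    ("add this concept"), sign=-1.0 for negation ("remove it").
    """
    g = np.zeros_like(z)
    for anchor, sign in concepts:
        g += sign * energy_grad(z, anchor)
    return g

# One guidance step on a latent: move against the composed energy gradient.
rng = np.random.default_rng(0)
z = rng.normal(size=4)
cat = np.ones(4)       # hypothetical anchor for a "cat" concept
stripes = -np.ones(4)  # hypothetical anchor for a "stripes" concept
step = 0.1

# "cat AND NOT stripes": add the cat energy, subtract the stripes energy.
z_new = z - step * composed_grad(z, [(cat, +1.0), (stripes, -1.0)])
```

Because the composition is a plain sum over energy terms, adding or removing a concept never requires retraining anything; only the guidance gradient changes.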

Originally published on March 04, 2026. Curated by AI News.

