[2507.08334] CoBELa: Steering Transparent Generation via Concept Bottlenecks on Energy Landscapes
Computer Science > Computer Vision and Pattern Recognition
arXiv:2507.08334 (cs)
[Submitted on 11 Jul 2025 (v1), last revised 3 Mar 2026 (this version, v3)]

Title: CoBELa: Steering Transparent Generation via Concept Bottlenecks on Energy Landscapes
Authors: Sangwon Kim, Kyoungoh Lee, Jeyoun Dong, Kwang-Ju Kim

Abstract: Generative concept bottleneck models aim to enable interpretable generation by routing synthesis through explicit, user-facing concepts. In practice, prior approaches often rely on non-explicit bottleneck representations (e.g., vision cues or opaque concept embeddings) or on black-box decoders to preserve image quality, which weakens transparency. We propose CoBELa (Concept Bottlenecks on Energy Landscapes), a decoder-free, energy-based framework that eliminates non-explicit bottleneck representations by conditioning generation entirely through per-concept energy functions over the latent space of a frozen pretrained generator, requiring no generator retraining and enabling post-hoc interpretation. Because these concept energies compose additively, CoBELa naturally supports compositional concept interventions: concept conjunction and negation are realized by summing or subtracting per-concept energy terms, without additional training. A diffusion-schedule...
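The additive composition of per-concept energies described in the abstract can be sketched with toy quadratic energies over a latent vector: conjunction adds a concept's energy term with a positive weight, negation with a negative one, and the latent is steered by gradient descent on the summed energy. Everything below (the anchor-based energies, the weights, the step size) is an illustrative assumption, not the paper's actual energy functions or generator.

```python
import numpy as np

def concept_energy_grad(z, anchor):
    # Gradient of a toy quadratic concept energy E(z) = 0.5 * ||z - anchor||^2
    # (a stand-in for a learned per-concept energy function).
    return z - anchor

def composed_grad(z, concepts):
    # Energies compose additively, so their gradients do too.
    # Positive weight = conjunction (pull toward the concept);
    # negative weight = negation (push away from it).
    g = np.zeros_like(z)
    for anchor, weight in concepts:
        g += weight * concept_energy_grad(z, anchor)
    return g

def steer(z, concepts, lr=0.1, steps=100):
    # Gradient descent on the composed energy landscape; the generator
    # itself stays frozen, only the latent z is moved.
    for _ in range(steps):
        z = z - lr * composed_grad(z, concepts)
    return z

# Steer toward concept A and away from concept B.
a = np.array([1.0, 0.0])  # hypothetical anchor for concept A
b = np.array([0.0, 1.0])  # hypothetical anchor for concept B
z = steer(np.zeros(2), [(a, +1.0), (b, -0.5)])
```

With these quadratic terms the composed energy 0.5‖z−a‖² − 0.25‖z−b‖² still has a unique minimum (at 2a − b), so the descent converges; a pure −1.0 negation weight would make this toy landscape unbounded, which is why the sketch down-weights the negated term.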