[2509.21385] Debugging Concept Bottleneck Models through Removal and Retraining
Computer Science > Computer Vision and Pattern Recognition
arXiv:2509.21385 (cs)
[Submitted on 23 Sep 2025 (v1), last revised 25 Mar 2026 (this version, v2)]

Title: Debugging Concept Bottleneck Models through Removal and Retraining
Authors: Eric Enouen, Sainyam Galhotra

Abstract: Concept Bottleneck Models (CBMs) use a set of human-interpretable concepts to predict the final task label, enabling domain experts not only to validate the CBM's predictions, but also to intervene on incorrect concepts at test time. However, these interventions fail to address systemic misalignment between the CBM and the expert's reasoning, such as when the model learns shortcuts from biased data. To address this, we present a general interpretable debugging framework for CBMs that follows a two-step process of Removal and Retraining. In the Removal step, experts use concept explanations to identify and remove any undesired concepts. In the Retraining step, we introduce CBDebug, a novel method that leverages the interpretability of CBMs as a bridge for converting concept-level user feedback into sample-level auxiliary labels. These labels are then used to apply supervised bias mitigation and targeted augmentation, reducing the model's reliance on undesired concepts. We evaluate our framework with both real and automated expert feedback, and ...
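To make the bottleneck structure and the Removal step concrete, here is a minimal illustrative sketch (not the paper's CBDebug implementation): a CBM factors prediction into a concept predictor followed by a label predictor that sees only the concepts, so an undesired concept flagged by an expert can be removed by masking its activation. All weights and names below are hypothetical placeholders.

```python
import numpy as np

# Minimal concept bottleneck model sketch: x -> concepts c -> label logits y.
# Weights are random placeholders, not learned parameters from the paper.
rng = np.random.default_rng(0)

n_features, n_concepts, n_classes = 8, 4, 3
W_concept = rng.normal(size=(n_features, n_concepts))  # concept predictor
W_label = rng.normal(size=(n_concepts, n_classes))     # label predictor

def predict(x, concept_mask=None):
    """Predict class logits; concept_mask zeroes out removed concepts."""
    c = x @ W_concept            # concept activations (the bottleneck)
    if concept_mask is not None:
        c = c * concept_mask     # Removal step: drop expert-flagged concepts
    return c @ W_label           # task logits computed from concepts only

x = rng.normal(size=(n_features,))
full = predict(x)

# Suppose an expert flags concept 2 as an undesired shortcut; remove it.
mask = np.array([1.0, 1.0, 0.0, 1.0])
debugged = predict(x, concept_mask=mask)
```

Because the label predictor depends on the input only through the concept layer, zeroing a concept provably eliminates its influence on the prediction; the Retraining step (CBDebug) would then go further, using such feedback to relabel samples and retrain so the model stops relying on the removed concept.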