[2602.23947] Hierarchical Concept-based Interpretable Models
Computer Science > Machine Learning
arXiv:2602.23947 (cs)
[Submitted on 27 Feb 2026]

Title: Hierarchical Concept-based Interpretable Models
Authors: Oscar Hill, Mateo Espinosa Zarlenga, Mateja Jamnik

Abstract: Modern deep neural networks remain challenging to interpret due to the opacity of their latent representations, impeding model understanding, debugging, and debiasing. Concept Embedding Models (CEMs) address this by mapping inputs to human-interpretable concept representations from which tasks can be predicted. Yet, CEMs fail to represent inter-concept relationships and require concept annotations at different granularities during training, limiting their applicability. In this paper, we introduce Hierarchical Concept Embedding Models (HiCEMs), a new family of CEMs that explicitly model concept relationships through hierarchical structures. To enable HiCEMs in real-world settings, we propose Concept Splitting, a method for automatically discovering finer-grained sub-concepts from a pretrained CEM's embedding space without requiring additional annotations. This allows HiCEMs to generate fine-grained explanations from limited concept labels, reducing annotation burdens. Our evaluation across multiple datasets, including a user study and experiments on PseudoKitchens, a newly proposed concept-based dataset of 3D kitchen renders, demonstrates that (1) C...
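The abstract does not spell out how Concept Splitting operates over a pretrained CEM's embedding space. Below is a minimal, purely illustrative sketch assuming one plausible reading: per-sample embeddings of a single concept are clustered (here with k-means) to surface candidate sub-concepts without new annotations. The function name split_concept, the parameter n_subconcepts, and the use of k-means are all assumptions for illustration, not the paper's actual procedure.

```python
import numpy as np
from sklearn.cluster import KMeans


def split_concept(concept_embeddings: np.ndarray, n_subconcepts: int = 2, seed: int = 0):
    """Cluster per-sample embeddings of one concept into candidate sub-concepts.

    Hypothetical sketch: the paper's Concept Splitting method may differ.
    concept_embeddings: (n_samples, embedding_dim) array of a pretrained CEM's
    embeddings for a single concept, collected over a dataset.
    Returns a sub-concept index per sample and one centroid per sub-concept.
    """
    km = KMeans(n_clusters=n_subconcepts, n_init=10, random_state=seed)
    labels = km.fit_predict(concept_embeddings)
    return labels, km.cluster_centers_


# Toy usage: 200 synthetic 16-dim embeddings for one concept,
# drawn from two modes standing in for latent sub-concepts.
rng = np.random.default_rng(0)
emb = np.vstack([
    rng.normal(loc=-1.0, size=(100, 16)),
    rng.normal(loc=+1.0, size=(100, 16)),
])
labels, centroids = split_concept(emb, n_subconcepts=2)
print(labels[:5], centroids.shape)  # e.g. [0 0 0 0 0] (2, 16)
```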