[2603.01274] GlassMol: Interpretable Molecular Property Prediction with Concept Bottleneck Models
Computer Science > Machine Learning

arXiv:2603.01274 (cs) [Submitted on 1 Mar 2026]

Title: GlassMol: Interpretable Molecular Property Prediction with Concept Bottleneck Models

Authors: Oscar Rivera, Ziqing Wang, Matthieu Dagommer, Abhishek Pandey, Kaize Ding

Abstract: Machine learning accelerates molecular property prediction, yet state-of-the-art Large Language Models and Graph Neural Networks operate as black boxes. In drug discovery, where safety is critical, this opacity risks masking false correlations and excluding human expertise. Existing interpretability methods suffer from the effectiveness-trustworthiness trade-off: explanations may fail to reflect a model's true reasoning, degrade performance, or lack domain grounding. Concept Bottleneck Models (CBMs) offer a solution by projecting inputs to human-interpretable concepts before readout, ensuring that explanations are inherently faithful to the decision process. However, adapting CBMs to chemistry faces three challenges: the Relevance Gap (selecting task-relevant concepts from a large descriptor space), the Annotation Gap (obtaining concept supervision for molecular data), and the Capacity Gap (degrading performance due to bottleneck constraints). We introduce GlassMol, a model-agnostic CBM that addresses these gaps through automated con...
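To make the concept-bottleneck idea concrete, here is a minimal sketch of the two-stage structure the abstract describes: inputs are first mapped to human-interpretable concept scores, and the prediction is computed only from those concepts, so the concept scores are faithful to the decision by construction. This is an illustrative toy, not GlassMol's implementation; the concept names, thresholds, and weights below are all hypothetical.

```python
# Toy concept-bottleneck model (CBM) sketch. All descriptor names,
# thresholds, and weights are hypothetical, chosen only to illustrate
# the input -> concepts -> readout structure described in the abstract.

def concept_layer(features):
    """Map raw molecular descriptors to human-interpretable concept scores."""
    return {
        "is_lipophilic":   1.0 if features["logP"] > 3.0 else 0.0,
        "is_heavy":        1.0 if features["mol_weight"] > 500.0 else 0.0,
        "has_hbond_donor": 1.0 if features["h_donors"] > 0 else 0.0,
    }

def readout(concepts, weights, bias=0.0):
    """Predict only from concepts: the bottleneck makes the explanation
    (the concept scores) inherently faithful to the decision process."""
    score = bias + sum(weights[name] * value for name, value in concepts.items())
    return score, concepts

# Hypothetical readout weights and a toy molecule's descriptors.
weights = {"is_lipophilic": -1.2, "is_heavy": -0.8, "has_hbond_donor": 0.5}
mol = {"logP": 4.1, "mol_weight": 320.0, "h_donors": 2}

score, explanation = readout(concept_layer(mol), weights)
# `explanation` tells us exactly which concepts drove `score`.
```

Because the readout never sees the raw input, any post-hoc inspection of `explanation` reflects the model's true reasoning, which is the property that distinguishes CBMs from saliency-style methods.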