[2601.00428] Interpretable ML Under the Microscope: Performance, Meta-Features, and the Regression-Classification Predictability Gap
Computer Science > Machine Learning
arXiv:2601.00428 (cs)
[Submitted on 1 Jan 2026 (v1), last revised 26 Mar 2026 (this version, v2)]

Title: Interpretable ML Under the Microscope: Performance, Meta-Features, and the Regression-Classification Predictability Gap
Authors: Mattia Billa, Giovanni Orlandi, Veronica Guidetti, Federica Mandreoli

Abstract: As machine learning models are increasingly deployed in high-stakes domains, the need for interpretability has grown to meet strict regulatory and accountability constraints. Despite this interest, systematic evaluations of inherently interpretable models for tabular data remain scarce and often focus solely on aggregated performance. To address this gap, we evaluate sixteen interpretable methods, including Explainable Boosting Machines (EBMs), Symbolic Regression (SR), and Generalized Optimal Sparse Decision Trees, across 216 real-world tabular datasets. We assess predictive accuracy, computational efficiency, and generalization under distributional shifts. Moving beyond aggregate performance rankings, we further analyze how model behavior varies with dataset meta-features and operationalize these descriptors to study algorithm selection. Our analyses reveal a clear dichotomy: in regression tasks, models exhibit a pred...
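To give a concrete sense of what "operationalizing dataset meta-features" can look like, here is a minimal illustrative sketch (not the paper's code; the descriptor names and formulas are assumptions chosen for illustration) that computes a few simple tabular-dataset descriptors of the kind often fed into algorithm-selection studies:

```python
import math
from collections import Counter

def meta_features(X, y):
    """Compute basic meta-features for a tabular classification dataset.

    X: list of feature rows, y: list of class labels.
    These descriptors are illustrative stand-ins for the richer
    meta-feature sets used in algorithm-selection research.
    """
    n_samples = len(X)
    n_features = len(X[0]) if X else 0
    counts = Counter(y)
    # Shannon entropy of the label distribution (a class-balance proxy):
    # maximal when classes are perfectly balanced.
    entropy = -sum((c / n_samples) * math.log2(c / n_samples)
                   for c in counts.values())
    return {
        "n_samples": n_samples,
        "n_features": n_features,
        "n_classes": len(counts),
        "dim_ratio": n_features / n_samples,  # features per sample
        "label_entropy": entropy,
    }

# Tiny hypothetical dataset: 4 samples, 2 features, 2 balanced classes.
X = [[0.1, 1.0], [0.4, 0.2], [0.9, 0.7], [0.3, 0.5]]
y = [0, 0, 1, 1]
print(meta_features(X, y))
```

Descriptors like these can then be used as inputs to a meta-model that predicts which interpretable method is likely to perform best on a given dataset.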