[2506.23396] AICO: Feature Significance Tests for Supervised Learning
Statistics > Machine Learning — arXiv:2506.23396 (stat)
[Submitted on 29 Jun 2025 (v1), last revised 2 Apr 2026 (this version, v5)]

Title: AICO: Feature Significance Tests for Supervised Learning
Authors: Kay Giesecke, Enguerrand Horel, Chartsiri Jirachotkulthorn

Abstract: Machine learning is central to modern science, industry, and policy, yet its predictive power often comes at the cost of transparency: we rarely know which input features truly drive a model's predictions. Without such understanding, researchers cannot draw reliable conclusions, practitioners cannot ensure fairness or accountability, and policymakers cannot trust or govern model-based decisions. Existing tools for assessing feature influence are limited; most lack statistical guarantees, and many require costly retraining or surrogate modeling, making them impractical for large modern models. We introduce AICO, a broadly applicable framework that turns model interpretability into an efficient statistical exercise. AICO tests whether each feature genuinely improves predictive performance by masking its information and measuring the resulting change. The method provides exact, finite-sample feature p-values and confidence intervals for feature importance through a simple, non-asymptotic hypothesis testing procedure. It requires no retraining, surrogate modeling...
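The abstract describes the general recipe (mask a feature, score the trained model before and after, and run an exact, non-asymptotic test on the per-sample changes) without spelling out the test statistic. As a hedged illustration only — not necessarily the paper's exact procedure — one standard way to obtain exact, finite-sample p-values from such paired score differences is a one-sided sign test, whose null distribution is Binomial(n, 1/2). The function name `sign_test_pvalue` and the toy loss differences below are illustrative assumptions, not taken from the paper.

```python
import math

def sign_test_pvalue(deltas):
    """One-sided exact sign test on per-sample loss changes.

    deltas: loss(masked input) - loss(original input) for each sample,
            computed from a single trained model (no retraining needed).
    H0: the feature is uninformative (positive and negative changes
        are equally likely).
    H1: masking the feature degrades predictions (changes tend positive).
    Returns the exact binomial tail P(Bin(n, 1/2) >= k), where n counts
    nonzero deltas and k counts the positive ones. This is a
    finite-sample p-value: no asymptotic approximation is involved.
    """
    nonzero = [d for d in deltas if d != 0]  # ties carry no sign information
    n = len(nonzero)
    k = sum(1 for d in nonzero if d > 0)
    return sum(math.comb(n, j) for j in range(k, n + 1)) / 2 ** n

# Toy example: masking the feature raised the loss on 9 of 10 samples.
deltas = [0.4, 0.1, 0.3, 0.2, 0.5, 0.1, 0.2, 0.3, -0.1, 0.4]
p = sign_test_pvalue(deltas)  # (C(10,9) + C(10,10)) / 2**10 = 11/1024
```

Because the null distribution is known exactly for every sample size, the test needs only one forward pass per masked feature, which matches the abstract's claim that no retraining or surrogate modeling is required.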