[2602.20396] cc-Shapley: Measuring Multivariate Feature Importance Needs Causal Context
Summary
The paper introduces cc-Shapley, a method for measuring multivariate feature importance in machine learning by incorporating causal context, addressing limitations of traditional Shapley values.
Why It Matters
Feature importance is central to explainable AI, especially for complex models. By mitigating spurious attributions caused by collider bias and suppression, cc-Shapley improves the interpretability and reliability of machine learning insights for practitioners and researchers alike.
Key Takeaways
- Traditional Shapley values may misrepresent feature importance due to collider bias.
- cc-Shapley incorporates causal context, improving accuracy in feature attribution.
- cc-Shapley can yield feature importance assessments that differ substantially from those of conventional observational Shapley values.
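As a minimal illustration of the collider bias mentioned in the takeaways (a simulated sketch; the variable names and data-generating process below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Minimal simulation of collider bias.
rng = np.random.default_rng(0)
n = 100_000
x1 = rng.normal(size=n)                        # feature, independent of y
y = rng.normal(size=n)                         # target
x2 = x1 + y + rng.normal(scale=0.1, size=n)    # collider: caused by both

# Marginally, x1 carries no information about y ...
print(np.corrcoef(x1, y)[0, 1])                # close to 0

# ... but conditioning on the collider x2 (here: a narrow slice of it)
# induces a strong spurious association between x1 and y. This is the
# kind of effect that contaminates observational feature attributions.
mask = np.abs(x2) < 0.1
print(np.corrcoef(x1[mask], y[mask])[0, 1])    # strongly negative (near -1)
```

Evaluating a feature "in the observational context" of a collider implicitly performs exactly this kind of conditioning, which is why purely data-driven attributions can mislead.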
arXiv Details
Computer Science > Machine Learning, arXiv:2602.20396 (cs). Submitted on 23 Feb 2026.
Title: cc-Shapley: Measuring Multivariate Feature Importance Needs Causal Context
Authors: Jörg Martin, Stefan Haufe
Abstract
Explainable artificial intelligence promises to yield insights into relevant features, thereby enabling humans to examine and scrutinize machine learning models or even facilitating scientific discovery. Considering the widespread technique of Shapley values, we find that purely data-driven operationalization of multivariate feature importance is unsuitable for such purposes. Even for simple problems with two features, spurious associations due to collider bias and suppression arise from considering one feature only in the observational context of the other, which can lead to misinterpretations. Causal knowledge about the data-generating process is required to identify and correct such misleading feature attributions. We propose cc-Shapley (causal context Shapley), an interventional modification of conventional observational Shapley values leveraging knowledge of the data's causal structure, thereby analyzing the relevance of a feature in the causal context of the remaining features. We show theoretically that this eradicates spurious association induced by collider bias. We compare the behavior of Sh...
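The abstract contrasts conventional observational Shapley values with an interventional modification. The following generic sketch shows that contrast for a two-feature toy problem; the model, data, and sampling schemes are illustrative assumptions, not the paper's cc-Shapley algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: x2 is strongly correlated with x1, but the model f
# depends on x1 only.
n = 50_000
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.5, size=n)
X = np.column_stack([x1, x2])

def f(Z):
    return Z[:, 0]  # the model ignores x2 entirely

def v_interventional(S, x):
    """Value of coalition S: missing features keep their marginal
    distribution (dependence on x_S is broken, as in an intervention)."""
    Z = X.copy()
    if S:
        Z[:, S] = x[S]
    return f(Z).mean()

def v_observational(S, x, tol=0.1):
    """Value of coalition S: missing features are sampled conditionally
    on X_S being close to x_S (crude nearest-slice approximation)."""
    if not S:
        return f(X).mean()
    mask = np.all(np.abs(X[:, S] - x[S]) < tol, axis=1)
    Z = X[mask].copy()
    Z[:, S] = x[S]
    return f(Z).mean()

def shapley_two_features(v, x):
    """Exact Shapley values for d = 2 from the four coalition values."""
    phi = []
    for i in (0, 1):
        j = 1 - i
        phi.append(0.5 * ((v([i], x) - v([], x))
                          + (v([0, 1], x) - v([j], x))))
    return phi

x = np.array([1.0, 1.0])
phi_obs = shapley_two_features(v_observational, x)
phi_int = shapley_two_features(v_interventional, x)

print(phi_obs)  # roughly [0.6, 0.4]: x2 gets credit despite being unused
print(phi_int)  # roughly [1.0, 0.0]: the irrelevant x2 gets none
```

Observationally, the model-irrelevant x2 receives substantial credit merely because it predicts x1; breaking the dependence removes that spurious attribution. cc-Shapley's contribution is to use knowledge of the causal structure to decide how such interventions should be carried out.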