[2409.14590] Explainable AI needs formalization
Computer Science > Machine Learning

arXiv:2409.14590 (cs)

[Submitted on 22 Sep 2024 (v1), last revised 30 Mar 2026 (this version, v5)]

Title: Explainable AI needs formalization

Authors: Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov, Ahcène Boubekki, Jörg Martin, Danny Panknin

Abstract: The field of "explainable artificial intelligence" (XAI) seemingly addresses the desire that decisions of machine learning systems should be human-understandable. However, in its current state, XAI itself needs scrutiny. Popular methods cannot reliably answer relevant questions about ML models, their training data, or test inputs, because they systematically attribute importance to input features that are independent of the prediction target. This limits the utility of XAI for diagnosing and correcting data and models, for scientific discovery, and for identifying intervention targets. The fundamental reason for this is that current XAI methods do not address well-defined problems and are not evaluated against targeted criteria of explanation correctness. Researchers should formally define the problems they intend to solve and design methods accordingly. This will lead to diverse use-case-dependent notions of explanation correctness and objective metrics of explanation performance that can be used to validate XAI algorithms.

Subjects: Machine Learning (cs.LG)
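The abstract's central claim, that popular attribution methods assign importance to input features independent of the prediction target, can be illustrated with the classic suppressor-variable construction. The sketch below is an illustration under assumed variable names (x1, x2, d), not code from the paper: a feature x2 that is statistically independent of the target y still receives a large weight in the optimal linear model, because the model uses it to cancel shared noise in the informative feature x1.

    # Minimal sketch of a suppressor variable: x2 is independent of y,
    # yet the optimal linear model assigns it a large weight.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 10_000
    y = rng.normal(size=n)    # prediction target
    d = rng.normal(size=n)    # shared noise/distractor
    x1 = y + d                # informative feature, contaminated by d
    x2 = d                    # suppressor: independent of y by construction

    X = np.column_stack([x1, x2])
    model = LinearRegression().fit(X, y)

    # The optimal fit uses x2 to cancel the noise in x1 (weights ~ [1, -1]),
    # so weight- or gradient-based "explanations" flag x2 as important even
    # though it carries no information about y on its own.
    print("weights:", model.coef_)                   # approx [ 1., -1.]
    print("corr(x2, y):", np.corrcoef(x2, y)[0, 1])  # approx 0

Any explanation method that reads importance off the weights (or, equivalently here, the gradients) will flag x2, even though x2 alone tells us nothing about y. Failures of this kind are what motivate the paper's call for formally defined explanation problems and objective correctness metrics.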