[2507.23497] Sufficient, Necessary and Complete Causal Explanations in Image Classification
Summary
This paper explores causal explanations in image classification, demonstrating their formal properties and computability, while introducing new concepts like δ-complete explanations.
Why It Matters
Understanding causal explanations in image classification is crucial for improving the transparency and interpretability of AI models. This research provides a rigorous framework that can enhance the reliability of AI systems in critical applications, such as healthcare and autonomous driving.
Key Takeaways
- Causal explanations offer formal rigor and computability for image classifiers.
- The paper introduces δ-complete explanations, enhancing interpretability.
- The algorithms are black-box, requiring no access to model internals.
- Different models exhibit varying patterns of sufficiency and necessity.
- Explanations are computed efficiently, averaging 6 seconds per image.
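Because the algorithms are black-box, a sufficiency check needs only query access to the classifier. The sketch below illustrates the general idea of testing whether a subset of pixels suffices to preserve a classification; the names (`is_sufficient`, `toy_model`) and the baseline-masking scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def toy_model(image: np.ndarray) -> int:
    """Stand-in black-box classifier: label 1 iff the top-left
    quadrant's mean intensity exceeds 0.5, else label 0."""
    h, w = image.shape
    return int(image[: h // 2, : w // 2].mean() > 0.5)

def is_sufficient(model, image, pixel_mask, baseline=0.0):
    """Black-box sufficiency check (hypothetical helper): keep only
    the pixels selected by pixel_mask, replace the rest with a
    baseline value, and test whether the label is unchanged."""
    masked = np.full_like(image, baseline)
    masked[pixel_mask] = image[pixel_mask]
    return model(masked) == model(image)

image = np.ones((8, 8))           # classified as 1 by toy_model
quadrant = np.zeros((8, 8), dtype=bool)
quadrant[:4, :4] = True           # keep only the top-left quadrant

print(is_sufficient(toy_model, image, quadrant))   # True: quadrant suffices
print(is_sufficient(toy_model, image, ~quadrant))  # False: its complement does not
```

A necessity check can be framed symmetrically: remove the candidate region and test whether the classification changes. Only forward queries to the model are used, which is what makes the approach applicable to arbitrary image classifiers.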
Computer Science > Artificial Intelligence
arXiv:2507.23497 (cs)
[Submitted on 31 Jul 2025 (v1), last revised 19 Feb 2026 (this version, v2)]
Authors: David A Kelly, Hana Chockler
Abstract
Existing algorithms for explaining the outputs of image classifiers are based on a variety of approaches and produce explanations that frequently lack formal rigour. On the other hand, logic-based explanations are formally and rigorously defined, but their computability relies on strict assumptions about the model that do not hold for image classifiers. In this paper, we show that causal explanations, in addition to being formally and rigorously defined, enjoy the same formal properties as logic-based ones, while still lending themselves to black-box algorithms and being a natural fit for image classifiers. We prove formal properties of causal explanations and their equivalence to logic-based explanations. We demonstrate how to subdivide an image into its sufficient and necessary components. We introduce $\delta$-complete explanations, which have a minimum confidence threshold, and 1-complete causal explanations, i.e., explanations that are classified with the same confidence as the original image. We implement our definitions, and our experimental results demon...