[2602.18160] Unifying Formal Explanations: A Complexity-Theoretic Perspective


Summary

This paper presents a unified framework for understanding formal explanations in machine learning, focusing on the computational complexity of sufficient and contrastive reasons for model predictions.

Why It Matters

Understanding the computational complexity of generating explanations is central to making machine learning models interpretable and trustworthy. By identifying which properties of the underlying value function make explanation computation tractable, the study points toward more efficient explanation algorithms for domains where automated decisions must be justified.

Key Takeaways

  • Introduces a unified framework for analyzing sufficient and contrastive explanations in ML.
  • Demonstrates how monotonicity, submodularity, and supermodularity of the value function shape computational complexity (standard definitions of these properties are sketched after this list).
  • Establishes polynomial-time results for global explainability settings across various ML models.
  • Highlights the NP-hardness of local explainability computations, emphasizing the challenges in this area.
  • Provides a deeper understanding of the properties affecting explanation generation in machine learning.
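For reference, the three properties named in the takeaways have standard combinatorial-optimization definitions for a set function v over feature subsets. The sketch below uses the textbook formulations and assumed notation; the paper's probabilistic value function may be stated differently.

```latex
% Textbook definitions of the three properties for a set function
% v : 2^N -> R over a feature set N (notation assumed, not taken from the paper).
\begin{align*}
\text{Monotone:}     \quad & v(S) \le v(T)
  && \text{for all } S \subseteq T \subseteq N, \\
\text{Submodular:}   \quad & v(S \cup \{i\}) - v(S) \ge v(T \cup \{i\}) - v(T)
  && \text{for all } S \subseteq T \subseteq N,\ i \in N \setminus T, \\
\text{Supermodular:} \quad & v(S \cup \{i\}) - v(S) \le v(T \cup \{i\}) - v(T)
  && \text{for all } S \subseteq T \subseteq N,\ i \in N \setminus T.
\end{align*}
```

Submodularity captures diminishing returns from adding a feature to a larger set, supermodularity the opposite; both properties are closely tied to tractability results in combinatorial optimization.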

Computer Science > Machine Learning
arXiv:2602.18160 (cs) [Submitted on 20 Feb 2026]

Title: Unifying Formal Explanations: A Complexity-Theoretic Perspective
Authors: Shahaf Bassan, Xuanxiang Huang, Guy Katz

Abstract: Previous work has explored the computational complexity of deriving two fundamental types of explanations for ML model predictions: (1) *sufficient reasons*, which are subsets of input features that, when fixed, determine a prediction, and (2) *contrastive reasons*, which are subsets of input features that, when modified, alter a prediction. Prior studies have examined these explanations in different contexts, such as non-probabilistic versus probabilistic frameworks and local versus global settings. In this study, we introduce a unified framework for analyzing these explanations, demonstrating that they can all be characterized through the minimization of a unified probabilistic value function. We then prove that the complexity of these computations is influenced by three key properties of the value function: (1) *monotonicity*, (2) *submodularity*, and (3) *supermodularity* - which are three fundamental properties in *combinatorial optimization*. Our findings uncover some counterintuitive results regarding the nature of these properties within the explanation settings examined. For instance, although ...
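To make the abstract's definitions concrete, here is a minimal sketch in Python under illustrative assumptions: the toy model, the uniform distribution over completions, and the helper names value, is_sufficient, and is_contrastive are hypothetical, not the paper's constructions. The idea is that a single value function v(S), the probability that fixing the features in S preserves the prediction, can express both notions: a sufficient reason is a set whose value meets a threshold, and a contrastive reason is a set whose modification can flip the prediction.

```python
import itertools
import random

# Hedged toy example: one probabilistic value function expressing both
# sufficient and contrastive reasons. Model and distribution are assumptions.

random.seed(0)
FEATURES = range(4)  # four binary input features

def model(x):
    """Toy classifier: predicts 1 iff at least three features are on."""
    return int(sum(x) >= 3)

def value(x, S, n_samples=2000):
    """Estimate v(S): probability (over uniform completions of the features
    outside S) that fixing the features in S to their values in x preserves
    the model's prediction on x."""
    base = model(x)
    hits = sum(
        model([x[i] if i in S else random.randint(0, 1) for i in FEATURES]) == base
        for _ in range(n_samples)
    )
    return hits / n_samples

def is_sufficient(x, S, delta=0.0):
    """S is a (probabilistic) sufficient reason if v(S) >= 1 - delta."""
    return value(x, S) >= 1.0 - delta

def is_contrastive(x, T):
    """T is a contrastive reason if some modification of only the features in T
    flips the prediction (checked exhaustively on this tiny domain)."""
    base = model(x)
    for bits in itertools.product([0, 1], repeat=len(T)):
        y = list(x)
        for i, b in zip(sorted(T), bits):
            y[i] = b
        if model(y) != base:
            return True
    return False

x = [1, 1, 1, 0]                    # model(x) == 1
print(is_sufficient(x, {0, 1, 2}))  # True: three fixed 'on' features force the prediction
print(is_sufficient(x, {0, 1}))     # False: roughly a quarter of completions flip it
print(is_contrastive(x, {0}))       # True: turning feature 0 off changes the prediction
```

In the deterministic case the two notions are complementary: a set T is contrastive exactly when fixing its complement fails to be sufficient, which is one way a single value function can cover both families of explanations.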
