[2602.17107] Owen-based Semantics and Hierarchy-Aware Explanation (O-Shap)
Summary
The paper presents O-Shap, a novel method for feature attribution in explainable AI, addressing limitations of traditional Shapley value approaches by incorporating hierarchical semantics for improved accuracy and interpretability.
Why It Matters
O-Shap enhances the understanding of AI model decisions, particularly in vision tasks, by addressing feature dependencies that traditional methods overlook. This advancement is crucial for developing more reliable and interpretable AI systems, especially in critical applications where transparency is essential.
Key Takeaways
- O-Shap utilizes the Owen value for hierarchical feature attribution.
- It improves upon traditional SHAP methods by ensuring semantic coherence.
- The new segmentation approach enhances attribution accuracy and interpretability.
- Experiments show O-Shap outperforming existing SHAP variants on both image and tabular datasets.
- This method is particularly effective in scenarios where feature structure is significant.
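To ground the takeaways above, here is a minimal exact Shapley value computation on a toy cooperative game. The game, worths, and function names are illustrative, not from the paper; practical SHAP implementations approximate this O(2^n) enumeration rather than running it exactly.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values by enumerating every coalition (cost O(2^n))."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):  # size of the coalition S that player i joins
            for S in combinations(others, r):
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (v(set(S) | {i}) - v(set(S)))
        phi[i] = total
    return phi

# Illustrative game: additive worths plus a synergy between features 0 and 1.
worth = {0: 1.0, 1: 1.0, 2: 3.0}

def v(S):
    return sum(worth[p] for p in S) + (2.0 if {0, 1} <= S else 0.0)

phi = shapley_values([0, 1, 2], v)
# The synergy is split equally between its participants:
# phi == {0: 2.0, 1: 2.0, 2: 3.0}, and the values sum to v({0, 1, 2}) = 7.
```

Note how the pairwise synergy is divided evenly between features 0 and 1: this fairness axiom is what the Owen value extends to whole groups of features.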
Computer Science > Artificial Intelligence
arXiv:2602.17107 (cs) [Submitted on 19 Feb 2026]
Authors: Xiangyu Zhou, Chenhan Xiao, Yang Weng
Abstract: Shapley value-based methods have become foundational in explainable artificial intelligence (XAI), offering theoretically grounded feature attributions through cooperative game theory. However, in practice, particularly in vision tasks, the assumption of feature independence breaks down, as features (i.e., pixels) often exhibit strong spatial and semantic dependencies. To address this, modern SHAP implementations now include the Owen value, a hierarchical generalization of the Shapley value that supports group attributions. While the Owen value preserves the foundations of Shapley values, its effectiveness critically depends on how feature groups are defined. We show that commonly used segmentations (e.g., axis-aligned or SLIC) violate key consistency properties, and propose a new segmentation approach that satisfies the $T$-property to ensure semantic alignment across hierarchy levels. This hierarchy enables computational pruning while improving attribution accuracy and interpretability. Experiments on image and tabular datasets demonstrate that O-Shap outperforms baseline SHAP variants in attribution pre...
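As a concrete illustration of how the Owen value's coalition structure redistributes credit relative to plain Shapley values, here is a minimal exact computation on a toy three-player majority game. The game, block structure, and function names are illustrative assumptions, not the paper's method; they show only the general Owen mechanism (permute blocks first, then players within a block) that O-Shap builds on.

```python
from itertools import combinations
from math import factorial

def owen_values(blocks, v):
    """Exact Owen values for a coalition structure `blocks` (a list of lists).

    Averages marginal contributions over orderings of the blocks and, within
    the player's own block, over orderings of its members.
    """
    m = len(blocks)
    owen = {}
    for k, Bk in enumerate(blocks):
        other = [j for j in range(m) if j != k]
        b = len(Bk)
        for i in Bk:
            mates = [p for p in Bk if p != i]
            total = 0.0
            for r in range(m):  # number of other blocks already present
                for R in combinations(other, r):
                    w_blocks = factorial(r) * factorial(m - r - 1) / factorial(m)
                    Q = {p for j in R for p in blocks[j]}
                    for t in range(b):  # blockmates already present
                        for T in combinations(mates, t):
                            w_within = factorial(t) * factorial(b - t - 1) / factorial(b)
                            base = Q | set(T)
                            total += w_blocks * w_within * (v(base | {i}) - v(base))
            owen[i] = total
    return owen

# Illustrative majority game: a coalition is worth 1 once it has >= 2 players.
def v(S):
    return 1.0 if len(S) >= 2 else 0.0

# Plain Shapley gives each player 1/3; grouping players 0 and 1 into one block
# shifts all credit to that pair, since the pair alone already wins.
owen = owen_values([[0, 1], [2]], v)
# owen == {0: 0.5, 1: 0.5, 2: 0.0}
```

The point of the toy example: the attribution depends on the block structure, which is exactly why the paper argues that the segmentation defining those blocks must itself be semantically consistent.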