[2604.02532] Feature Attribution Stability Suite: How Stable Are Post-Hoc Attributions?

arXiv - AI

Computer Science > Computer Vision and Pattern Recognition
arXiv:2604.02532 (cs) [Submitted on 2 Apr 2026]

Title: Feature Attribution Stability Suite: How Stable Are Post-Hoc Attributions?
Authors: Kamalasankari Subramaniakuppusamy, Jugal Gajjar

Abstract: Post-hoc feature attribution methods are widely deployed in safety-critical vision systems, yet their stability under realistic input perturbations remains poorly characterized. Existing metrics evaluate explanations primarily under additive noise, collapse stability to a single scalar, and fail to condition on prediction preservation, conflating explanation fragility with model sensitivity. We introduce the Feature Attribution Stability Suite (FASS), a benchmark that enforces prediction-invariance filtering, decomposes stability into three complementary metrics (structural similarity, rank correlation, and top-k Jaccard overlap), and evaluates across geometric, photometric, and compression perturbations. Evaluating four attribution methods (Integrated Gradients, GradientSHAP, Grad-CAM, LIME) across four architectures and three datasets (ImageNet-1K, MS COCO, and CIFAR-10), FASS shows that stability estimates depend critically on perturbation family and prediction-invariance filtering. Geometric perturbations expose substantially greater attr...
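To make two of the abstract's stability metrics concrete, here is a minimal numpy-only sketch of rank correlation and top-k Jaccard overlap between an attribution map and its perturbed counterpart. This is an illustrative reconstruction, not the paper's implementation: the function names, the tie-free Spearman computation, and the choice of k are assumptions, and the structural-similarity metric (SSIM) is omitted.

```python
import numpy as np

def spearman_rank_corr(a: np.ndarray, b: np.ndarray) -> float:
    """Spearman rank correlation between two flattened attribution maps.

    Assumes no ties (argsort-of-argsort ranking); a tie-corrected
    version would use average ranks instead.
    """
    ra = np.argsort(np.argsort(a.ravel())).astype(float)
    rb = np.argsort(np.argsort(b.ravel())).astype(float)
    # Pearson correlation of the ranks is the Spearman coefficient.
    return float(np.corrcoef(ra, rb)[0, 1])

def topk_jaccard(a: np.ndarray, b: np.ndarray, k: int) -> float:
    """Jaccard overlap of the k highest-attribution pixels in each map."""
    top_a = set(np.argsort(a.ravel())[-k:])
    top_b = set(np.argsort(b.ravel())[-k:])
    return len(top_a & top_b) / len(top_a | top_b)

# Toy check: an attribution map compared with itself is maximally stable.
rng = np.random.default_rng(0)
attr = rng.standard_normal((8, 8))
print(spearman_rank_corr(attr, attr))  # ≈ 1.0
print(topk_jaccard(attr, attr, k=10))  # 1.0
```

In the paper's setup these scores would only be computed for perturbed inputs whose prediction matches the original (the prediction-invariance filter); pairs where the model's label flips are excluded rather than scored.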

Originally published on April 06, 2026. Curated by AI News.
