[2603.22624] Toward Faithful Segmentation Attribution via Benchmarking and Dual-Evidence Fusion
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.22624 (cs)
[Submitted on 23 Mar 2026]

Title: Toward Faithful Segmentation Attribution via Benchmarking and Dual-Evidence Fusion
Authors: Abu Noman Md Sakib, OFM Riaz Rahman Aranya, Kevin Desai, Zijie Zhang

Abstract: Attribution maps for semantic segmentation are almost always judged by visual plausibility. Yet looking convincing does not guarantee that the highlighted pixels actually drive the model's prediction, nor that attribution credit stays within the target region. These questions require a dedicated evaluation protocol. We introduce a reproducible benchmark that tests intervention-based faithfulness, off-target leakage, perturbation robustness, and runtime on Pascal VOC and SBD across three pretrained backbones. To further demonstrate the benchmark, we propose Dual-Evidence Attribution (DEA), a lightweight correction that fuses gradient evidence with region-level intervention signals through agreement-weighted fusion. DEA increases emphasis where both sources agree and retains causal support when gradient responses are unstable. Across all completed runs, DEA consistently improves deletion-based faithfulness over gradient-only baselines and preserves strong robustness, at the cost of additional compute from intervent...
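The abstract's description of agreement-weighted fusion can be made concrete with a minimal sketch. This is not the paper's implementation: the normalization scheme, the agreement measure (one minus absolute difference), and the fallback to intervention evidence under disagreement are all assumptions chosen to match the stated behavior — boosting credit where gradient and intervention evidence agree, and keeping causal (intervention) support where gradient responses are unstable.

```python
import numpy as np

def dual_evidence_fusion(grad_map, interv_map, eps=1e-8):
    """Illustrative agreement-weighted fusion of two attribution maps.

    grad_map   : per-pixel gradient attribution, shape (H, W)
    interv_map : region-level intervention attribution, shape (H, W)
    Returns a fused map in [0, 1].
    """
    def norm(m):
        # Min-max normalize so the two evidence sources are comparable.
        m = m - m.min()
        return m / (m.max() + eps)

    g, v = norm(np.asarray(grad_map, float)), norm(np.asarray(interv_map, float))

    # Agreement is high where both sources assign similar credit.
    agreement = 1.0 - np.abs(g - v)

    # Where the sources agree, emphasize their shared evidence;
    # where they disagree, fall back to the intervention signal
    # (hypothetical choice: treat it as the more causal of the two).
    fused = agreement * 0.5 * (g + v) + (1.0 - agreement) * v
    return norm(fused)
```

In use, `grad_map` would come from a gradient attribution method (e.g. backpropagated saliency for the target class) and `interv_map` from masking or occluding candidate regions and measuring the change in the segmentation output; both names here are placeholders for whatever the benchmark's backbones produce.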