[2603.24801] Dissecting Model Failures in Abdominal Aortic Aneurysm Segmentation through Explainability-Driven Analysis
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.24801 (cs) [Submitted on 25 Mar 2026]

Title: Dissecting Model Failures in Abdominal Aortic Aneurysm Segmentation through Explainability-Driven Analysis

Authors: Abu Noman Md Sakib, Merjulah Roby, Zijie Zhang, Satish Muluk, Mark K. Eskandari, Ender A. Finol

Abstract: Computed tomography image segmentation of complex abdominal aortic aneurysms (AAA) often fails because models assign internal focus to irrelevant structures or fail to focus on thin, low-contrast targets. Where the model looks is the primary training signal, and thus we propose an Explainable AI (XAI)-guided encoder shaping framework. Our method computes a dense, attribution-based encoder focus map (the "XAI field") from the final encoder block and uses it in two complementary ways: (i) we align the predicted probability mass to the XAI field to promote agreement between focus and output; and (ii) we route the field into a lightweight refinement pathway and a confidence prior that modulates logits at inference, suppressing distractors while preserving subtle structures. The objective terms serve only as control signals; the contribution is the integration of attribution guidance into representation and decoding. We evaluate clinically valid...
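The abstract's two uses of the XAI field can be sketched in a minimal NumPy form. This is a hypothetical reconstruction, not the paper's implementation: the attribution is approximated as ReLU'd gradient-times-activation over the final encoder block, the alignment term as a KL divergence between normalized predicted mass and the field, and the confidence prior as an additive log-field term on the logits. All function names and the weighting parameter `lam` are assumptions for illustration.

```python
import numpy as np


def xai_field(grads, acts, eps=1e-8):
    """Dense attribution-based encoder focus map ("XAI field").

    Hypothetical formulation: gradient-times-activation over the final
    encoder block, summed across channels, rectified, and normalized to
    a probability field over spatial positions.
    """
    attr = np.maximum((grads * acts).sum(axis=0), 0.0)
    return attr / (attr.sum() + eps)


def focus_alignment_loss(pred_probs, field, eps=1e-8):
    """Align predicted probability mass to the XAI field.

    KL divergence between the normalized predicted mass and the field,
    promoting agreement between where the model looks and its output.
    """
    p = pred_probs / (pred_probs.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (field + eps))))


def modulate_logits(logits, field, lam=0.5, eps=1e-8):
    """Confidence prior at inference.

    Adds a log-field term so logits are suppressed where encoder focus
    is low (distractors) and preserved where it is high.
    """
    return logits + lam * np.log(field + eps)
```

Under these assumptions, `focus_alignment_loss` is non-negative and vanishes only when the predicted mass matches the field, while `modulate_logits` leaves the logit shape unchanged and only re-weights positions by encoder focus.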