[2602.21178] XMorph: Explainable Brain Tumor Analysis Via LLM-Assisted Hybrid Deep Intelligence
Summary
XMorph presents a novel framework for explainable brain tumor analysis, achieving 96% classification accuracy while addressing the interpretability and computational-efficiency constraints of AI-driven medical imaging.
Why It Matters
The study highlights the critical need for explainability in AI applications within healthcare, particularly in diagnosing brain tumors. By combining advanced deep learning techniques with interpretable AI, XMorph aims to bridge the gap between high performance and clinical usability, potentially improving patient outcomes and fostering trust in AI systems.
Key Takeaways
- XMorph achieves 96% classification accuracy for brain tumors.
- The framework enhances interpretability through a dual-channel explainable AI module.
- Information-Weighted Boundary Normalization improves morphological representation of tumors.
- The study addresses the limitations of conventional AI models in clinical settings.
- Public availability of source code promotes transparency and further research.
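The takeaways mention a dual-channel explainable AI module that pairs visual saliency with LLM-generated textual rationales. The paper's abstract does not describe how the two channels are connected, so the following is only a minimal sketch of one plausible bridge: summarizing a saliency heatmap into a textual prompt that an LLM could then turn into a clinical rationale. The function name, grid layout, and prompt wording are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def build_rationale_prompt(heatmap, predicted_class, grid=3):
    """Hypothetical bridge from the visual channel to the textual channel.

    heatmap: (H, W) saliency map with values in [0, 1].
    Splits the map into a grid x grid layout, finds the most salient
    region, and returns a prompt string an LLM could expand into a
    clinically phrased rationale.
    """
    h, w = heatmap.shape
    rows = ["top", "middle", "bottom"]      # labels for a 3x3 grid
    cols = ["left", "center", "right"]
    best_region, best_score = None, -1.0
    for i in range(grid):
        for j in range(grid):
            block = heatmap[i * h // grid:(i + 1) * h // grid,
                            j * w // grid:(j + 1) * w // grid]
            score = float(block.mean())
            if score > best_score:
                best_region, best_score = f"{rows[i]}-{cols[j]}", score
    return (f"The classifier predicts '{predicted_class}'. "
            f"Its saliency map is most active in the {best_region} region "
            f"of the MRI slice (mean activation {best_score:.2f}). "
            "Explain, in clinically interpretable terms, what this focus suggests.")

# toy example: force the top-right region to dominate
heatmap = np.zeros((9, 9))
heatmap[0:3, 6:9] = 1.0
prompt = build_rationale_prompt(heatmap, "glioma")
print(prompt)
```

In a real pipeline the prompt would presumably also carry patient-agnostic imaging context; the point here is only that the visual channel's output can be serialized into text the LLM channel consumes.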
Computer Science > Computer Vision and Pattern Recognition
arXiv:2602.21178 (cs)
[Submitted on 24 Feb 2026]
Title: XMorph: Explainable Brain Tumor Analysis Via LLM-Assisted Hybrid Deep Intelligence
Authors: Sepehr Salem Ghahfarokhi, M. Moein Esfahani, Raj Sunderraman, Vince Calhoun, Mohammed Alser
Abstract: Deep learning has significantly advanced automated brain tumor diagnosis, yet clinical adoption remains limited by interpretability and computational constraints. Conventional models often act as opaque "black boxes" and fail to quantify the complex, irregular tumor boundaries that characterize malignant growth. To address these challenges, we present XMorph, an explainable and computationally efficient framework for fine-grained classification of three prominent brain tumor types: glioma, meningioma, and pituitary tumors. We propose an Information-Weighted Boundary Normalization (IWBN) mechanism that emphasizes diagnostically relevant boundary regions alongside nonlinear chaotic and clinically validated features, enabling a richer morphological representation of tumor growth. A dual-channel explainable AI module combines GradCAM++ visual cues with LLM-generated textual rationales, translating model reasoning into clinically interpretable insights. The proposed framework achieves a ...
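The abstract names GradCAM++ as the visual channel but gives no implementation details. As a point of reference, the standard Grad-CAM++ weighting (Chattopadhyay et al.) can be sketched in a few lines of NumPy under the usual exponential-score approximation, in which the alpha coefficients reduce to powers of the first-order gradient. All array shapes and names below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def grad_cam_plus_plus(activations, gradients):
    """Minimal Grad-CAM++ sketch (exponential-score approximation).

    activations: (K, H, W) feature maps of a chosen conv layer
    gradients:   (K, H, W) gradients of the class score w.r.t. those maps
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    grads2 = gradients ** 2
    grads3 = gradients ** 3
    # alpha coefficients: grad^2 / (2 * grad^2 + sum(A * grad^3))
    denom = 2.0 * grads2 + (activations * grads3).sum(axis=(1, 2), keepdims=True)
    denom = np.where(denom != 0.0, denom, 1e-8)   # guard against division by zero
    alphas = grads2 / denom
    # channel weights: alphas weighted by positive gradients, summed spatially
    weights = (alphas * np.maximum(gradients, 0.0)).sum(axis=(1, 2))
    # weighted combination of feature maps, ReLU, then max-normalize
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# toy example with random feature maps and gradients
rng = np.random.default_rng(0)
A = rng.random((8, 7, 7))
G = rng.standard_normal((8, 7, 7))
heatmap = grad_cam_plus_plus(A, G)
print(heatmap.shape)
```

In practice one would pull `activations` and `gradients` from a hook on the network's last convolutional layer; libraries such as pytorch-grad-cam package this end to end.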