[2602.13308] Learning to Select Like Humans: Explainable Active Learning for Medical Imaging
Summary
This paper presents an explainable active learning framework for medical imaging that improves data efficiency and interpretability by integrating spatial attention alignment into sample selection.
Why It Matters
The study addresses the challenge of limited labeled data in medical imaging by proposing a dual-criterion selection strategy that not only focuses on predictive uncertainty but also ensures the model learns from clinically relevant features. This approach is crucial for improving the reliability of AI in clinical settings.
Key Takeaways
- The proposed framework combines classification uncertainty and attention misalignment for sample selection.
- It significantly outperforms random sampling in accuracy across multiple medical imaging datasets.
- The method enhances both predictive performance and spatial interpretability, crucial for clinical applications.
- Utilizing only 570 samples, the approach achieves high accuracy, demonstrating data efficiency.
- Grad-CAM visualizations confirm that the model focuses on diagnostically relevant regions.
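The dual-criterion idea in the takeaways above can be sketched as a scoring function. This is a minimal illustration, not the authors' exact formulation: the summary does not specify how the two criteria are combined, so the entropy-based uncertainty, the IoU-based misalignment measure, and the mixing weight `lam` are all assumptions for illustration.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of softmax outputs; higher = more uncertain."""
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=-1)

def attention_misalignment(attn_map, roi_mask, thresh=0.5):
    """1 - IoU between the binarized attention map and an expert ROI mask.
    (Illustrative proxy; the paper's misalignment measure may differ.)"""
    attn_bin = attn_map >= thresh
    inter = np.logical_and(attn_bin, roi_mask).sum()
    union = np.logical_or(attn_bin, roi_mask).sum()
    iou = inter / union if union > 0 else 0.0
    return 1.0 - iou

def dual_criterion_scores(probs, attn_maps, roi_masks, lam=0.5):
    """Blend uncertainty and misalignment; lam is an assumed trade-off weight."""
    unc = predictive_entropy(probs)
    mis = np.array([attention_misalignment(a, m)
                    for a, m in zip(attn_maps, roi_masks)])
    return (1.0 - lam) * unc + lam * mis

def select_for_annotation(scores, k):
    """Pick the k highest-scoring unlabeled samples for expert annotation."""
    return np.argsort(scores)[::-1][:k]
```

Under this sketch, a confidently predicted sample whose attention sits outside the clinically relevant region can still rank high for annotation, which is the behavior that distinguishes the dual criterion from uncertainty-only acquisition.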
Paper Details
Electrical Engineering and Systems Science > Image and Video Processing, arXiv:2602.13308 (eess); COVID-19
Submitted on 10 Feb 2026
Authors: Ifrat Ikhtear Uddin, Longwei Wang, Xiao Qin, Yang Zhou, KC Santosh
Abstract
Medical image analysis requires substantial labeled data for model training, yet expert annotation is expensive and time-consuming. Active learning (AL) addresses this challenge by strategically selecting the most informative samples for annotation, but traditional methods rely solely on predictive uncertainty while ignoring whether models learn from clinically meaningful features, a critical requirement for clinical deployment. We propose an explainability-guided active learning framework that integrates spatial attention alignment into the sample acquisition process. Our approach advocates a dual-criterion selection strategy combining: (i) classification uncertainty to identify informative ex...
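The Grad-CAM visualizations cited in the takeaways rest on a simple computation. The sketch below assumes the activations and gradients of a target convolutional layer have already been extracted (e.g. via framework hooks); it is a generic Grad-CAM illustration, not the paper's implementation.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Minimal Grad-CAM: channel weights are the global-average-pooled
    gradients; the heatmap is the ReLU of the weighted sum of activation
    maps, normalized to [0, 1].

    activations, gradients: arrays of shape (C, H, W) from the target
    conv layer for a single image and target class.
    """
    weights = gradients.mean(axis=(1, 2))             # (C,)
    cam = np.tensordot(weights, activations, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)                        # ReLU
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

Upsampled to the input resolution, such a heatmap is what gets compared against expert-marked regions to judge whether the model attends to diagnostically relevant areas.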