[2602.15087] StrokeNeXt: A Siamese-encoder Approach for Brain Stroke Classification in Computed Tomography Imagery
Summary
StrokeNeXt introduces a Siamese-encoder model for classifying brain strokes in CT images, achieving high accuracy and low misclassification rates.
Why It Matters
Accurate stroke classification is critical for timely and effective treatment, so this research addresses a pressing clinical need. The model's high performance could strengthen diagnostic workflows in medical imaging and potentially improve patient outcomes.
Key Takeaways
- StrokeNeXt utilizes a dual-branch design with ConvNeXt encoders for improved stroke classification.
- The model achieves accuracies and F1-scores of up to 0.988, outperforming convolutional and Transformer-based baselines.
- Statistical tests confirm significant performance improvements over baseline models.
- Low inference time and fast convergence make StrokeNeXt practical for clinical use.
- Robust performance across diagnostic categories enhances reliability in stroke detection.
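The paper reports that paired statistical tests confirm the gains over baselines are significant, without the summary naming the specific test. A minimal sketch of one common choice, a paired sign-flip permutation test on per-fold accuracies, is shown below; the fold scores are hypothetical placeholders, not results from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-fold accuracies (NOT from the paper), paired by
# evaluation fold: StrokeNeXt vs. one baseline model.
stroke_next = np.array([0.988, 0.986, 0.985, 0.987, 0.984,
                        0.988, 0.983, 0.986, 0.985, 0.987])
baseline    = np.array([0.961, 0.958, 0.965, 0.960, 0.957,
                        0.962, 0.956, 0.959, 0.963, 0.958])

diffs = stroke_next - baseline
observed = diffs.mean()

# Under H0 (no difference), the sign of each paired difference is
# arbitrary: flip signs at random and compare the resulting means
# against the observed mean gain (two-sided).
n_perm = 10_000
signs = rng.choice([-1.0, 1.0], size=(n_perm, diffs.size))
perm_means = (signs * diffs).mean(axis=1)
p_value = (np.abs(perm_means) >= abs(observed)).mean()

print(f"mean gain = {observed:.4f}, p = {p_value:.4f}")
```

With all ten paired differences positive, the estimated p-value lands near the exact value of 2/2^10 ≈ 0.002, i.e. well below 0.05.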
Electrical Engineering and Systems Science > Image and Video Processing
arXiv:2602.15087 (eess) [Submitted on 16 Feb 2026]
Title: StrokeNeXt: A Siamese-encoder Approach for Brain Stroke Classification in Computed Tomography Imagery
Authors: Leo Thomas Ramos, Angel D. Sappa
Abstract: We present StrokeNeXt, a model for stroke classification in 2D Computed Tomography (CT) images. StrokeNeXt employs a dual-branch design with two ConvNeXt encoders, whose features are fused through a lightweight convolutional decoder based on stacked 1D operations, including a bottleneck projection and transformation layers, and a compact classification head. The model is evaluated on a curated dataset of 6,774 CT images, addressing both stroke detection and subtype classification between ischemic and hemorrhage cases. StrokeNeXt consistently outperforms convolutional and Transformer-based baselines, reaching accuracies and F1-scores of up to 0.988. Paired statistical tests confirm that the performance gains are statistically significant, while class-wise sensitivity and specificity demonstrate robust behavior across diagnostic categories. Calibration analysis shows reduced prediction error compared to competing methods, and confusion matrix results indicate low misclassification rates. In addition,...
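The abstract describes the data flow through the architecture: two Siamese (weight-sharing) encoders, concatenation-style fusion, a decoder built from stacked 1D operations (a bottleneck projection plus a transformation layer), and a compact classification head. The numpy sketch below traces that flow with random stand-in weights; all dimensions, the pooling step, the fusion-by-concatenation, and the ReLU nonlinearity are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 64-channel feature maps,
# a 128-d encoder embedding, a 32-d bottleneck, and 3 output classes
# (e.g. no stroke / ischemic / hemorrhage).
C, D, B, NUM_CLASSES = 64, 128, 32, 3

# Siamese design: ONE set of encoder weights shared by both branches.
W_enc = rng.standard_normal((D, C)) * 0.1

def encode(x):
    """Stand-in for a ConvNeXt encoder: global-average-pool the spatial
    dimensions, then apply the shared linear projection (in the real
    model these weights are learned)."""
    pooled = x.mean(axis=(1, 2))        # (C, H, W) -> (C,)
    return W_enc @ pooled               # (D,)

def op1d(v, W, b):
    """Pointwise 1D operation on a feature vector (kernel size 1),
    followed by a ReLU-style nonlinearity."""
    return np.maximum(W @ v + b, 0.0)

# Lightweight decoder: bottleneck projection, then a transformation
# layer, then a compact classification head.
W_bot, b_bot = rng.standard_normal((B, 2 * D)) * 0.1, np.zeros(B)
W_tr,  b_tr  = rng.standard_normal((B, B)) * 0.1, np.zeros(B)
W_cls, b_cls = rng.standard_normal((NUM_CLASSES, B)) * 0.1, np.zeros(NUM_CLASSES)

def stroke_next_forward(view_a, view_b):
    # 1) Encode both inputs with the shared (Siamese) encoder.
    fa, fb = encode(view_a), encode(view_b)
    # 2) Fuse by concatenation, then bottleneck + transformation (1D ops).
    fused = np.concatenate([fa, fb])    # (2D,)
    z = op1d(op1d(fused, W_bot, b_bot), W_tr, b_tr)
    # 3) Classification head -> class logits.
    return W_cls @ z + b_cls

logits = stroke_next_forward(rng.standard_normal((C, 8, 8)),
                             rng.standard_normal((C, 8, 8)))
print(logits.shape)  # (3,)
```

The key Siamese property is visible in `encode`: both branches reuse `W_enc`, so the two views are embedded in the same feature space before fusion.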