[2602.15339] Benchmarking Self-Supervised Models for Cardiac Ultrasound View Classification


arXiv - AI · 4 min read

Summary

This article evaluates self-supervised learning models for cardiac ultrasound view classification, comparing USF-MAE and MoCo v3 using the CACTUS dataset.

Why It Matters

Accurate classification of cardiac ultrasound images is critical for clinical diagnosis. This study highlights the effectiveness of self-supervised learning in medical imaging, demonstrating that USF-MAE outperforms existing models, which could enhance diagnostic accuracy and efficiency in healthcare.

Key Takeaways

  • USF-MAE outperforms MoCo v3 in classifying cardiac ultrasound views.
  • The study uses a large dataset (CACTUS) with expert-annotated images.
  • Performance metrics include ROC-AUC, accuracy, F1-score, and recall.
  • Statistical significance was confirmed for the performance differences.
  • Self-supervised learning shows promise for improving automated medical imaging.
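The four metrics listed above can be computed with scikit-learn. The sketch below is illustrative only (not the paper's code): it scores random predictions for a six-class view-classification task (A4C, PL, PSAV, PSMV, Random, SC), using macro averaging as an assumed aggregation choice.

```python
# Illustrative sketch (not the paper's code): computing the four reported
# metrics for a 6-class cardiac-view classification task with scikit-learn.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, recall_score, roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_classes = 100, 6  # views: A4C, PL, PSAV, PSMV, Random, SC

y_true = rng.integers(0, n_classes, size=n_samples)
# Simulated per-class probability scores (each row sums to 1)
scores = rng.random((n_samples, n_classes))
scores /= scores.sum(axis=1, keepdims=True)
y_pred = scores.argmax(axis=1)

metrics = {
    # One-vs-rest ROC-AUC, macro-averaged over the six views
    "roc_auc": roc_auc_score(y_true, scores, multi_class="ovr", average="macro"),
    "accuracy": accuracy_score(y_true, y_pred),
    "f1": f1_score(y_true, y_pred, average="macro"),
    "recall": recall_score(y_true, y_pred, average="macro"),
}
print(metrics)
```

With real model outputs in place of the random scores, the same dictionary would be recorded once per cross-validation fold.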

Electrical Engineering and Systems Science > Image and Video Processing

arXiv:2602.15339 (eess) · Submitted on 17 Feb 2026

Title: Benchmarking Self-Supervised Models for Cardiac Ultrasound View Classification

Authors: Youssef Megahed, Salma I. Megahed, Robin Ducharme, Inok Lee, Adrian D. C. Chan, Mark C. Walker, Steven Hawken

Abstract: Reliable interpretation of cardiac ultrasound images is essential for accurate clinical diagnosis and assessment. Self-supervised learning has shown promise in medical imaging by leveraging large unlabelled datasets to learn meaningful representations. In this study, we evaluate and compare two self-supervised learning frameworks, USF-MAE, developed by our team, and MoCo v3, on the recently introduced CACTUS dataset (37,736 images) for automated simulated cardiac view (A4C, PL, PSAV, PSMV, Random, and SC) classification. Both models used 5-fold cross-validation, enabling robust assessment of generalization performance across multiple random splits. The CACTUS dataset provides expert-annotated cardiac ultrasound images with diverse views. We adopt an identical training protocol for both models to ensure a fair comparison. Both models are configured with a learning rate of 0.0001 and a weight decay of 0.01. For each fold, we record performance metrics including ROC-AUC, ...
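The 5-fold evaluation protocol from the abstract can be sketched as follows. This is a minimal, hypothetical outline, not the authors' pipeline: the CACTUS data loading and the USF-MAE / MoCo v3 models are omitted, placeholder labels stand in for the 37,736 images, and stratified splitting is an assumption (the abstract says only "5-fold cross-validation").

```python
# Minimal sketch of the evaluation protocol described in the abstract:
# 5-fold cross-validation with the stated hyperparameters.
import numpy as np
from sklearn.model_selection import StratifiedKFold

LEARNING_RATE = 1e-4  # both models: learning rate 0.0001
WEIGHT_DECAY = 0.01   # both models: weight decay 0.01

# Placeholder labels standing in for the CACTUS images (6 views)
labels = np.repeat(np.arange(6), 50)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
fold_sizes = []
for fold, (train_idx, val_idx) in enumerate(skf.split(np.zeros_like(labels), labels)):
    # In the real protocol, each fold would train/fine-tune both models here
    # and record ROC-AUC, accuracy, F1-score, and recall on val_idx.
    fold_sizes.append((len(train_idx), len(val_idx)))

print(fold_sizes)  # five (train, validation) split sizes
```

Running both models over identical fold splits with identical hyperparameters is what makes the reported head-to-head comparison fair.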

Related Articles

LLMs

[R] Depth-first pruning transfers: GPT-2 → TinyLlama with stable gains and minimal loss

TL;DR: Removing the right layers (instead of shrinking all layers) makes transformer models ~8–12% smaller with only ~6–8% quality loss, ...

Reddit - Machine Learning · 1 min ·
LLMs

Built a training stability monitor that detects instability before your loss curve shows anything — open sourced the core today

Been working on a weight divergence trajectory curvature approach to detecting neural network training instability. Treats weight updates...

Reddit - Artificial Intelligence · 1 min ·
AI Infrastructure

UMKC Announces New Master of Science in Artificial Intelligence

UMKC announces a new Master of Science in Artificial Intelligence program aimed at addressing workforce demand for AI expertise, set to l...

AI News - General · 4 min ·
Machine Learning

Improving AI models’ ability to explain their predictions

AI News - General · 9 min ·