[2602.17531] Position: Evaluation of ECG Representations Must Be Fixed
Summary
This paper critiques current benchmarking practices in 12-lead ECG representation learning, advocating for broader evaluation criteria to enhance clinical relevance and reliability.
Why It Matters
The paper addresses significant gaps in ECG representation evaluation, emphasizing the need for benchmarks that reflect a wider range of clinical information. This is crucial for advancing machine learning applications in healthcare, ensuring that models are not only accurate but also clinically meaningful.
Key Takeaways
- Current ECG benchmarking focuses too narrowly on arrhythmia and waveform morphology.
- Evaluation should include structural heart disease and patient-level forecasting.
- Applying best practices in evaluation can significantly alter conclusions about model performance.
- Randomly initialized encoders can serve as effective baseline models.
- The study provides empirical evidence supporting the need for broader evaluation targets and more rigorous evaluation practices.
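The random-encoder baseline in the takeaways above can be illustrated with a small sketch. This is not the paper's implementation: the toy ECG-shaped data, the random-projection "encoder", and the ridge-regression probe are all hypothetical stand-ins for the frozen encoder and linear evaluation it describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for 12-lead ECG windows: (n_records, leads, samples)
n, leads, t = 200, 12, 250
X = rng.normal(size=(n, leads, t))
# Two synthetic binary labels tied to simple per-lead statistics
y = np.stack([
    (X[:, 0].std(axis=1) > 1.0).astype(float),
    (X[:, 1].mean(axis=1) > 0.0).astype(float),
], axis=1)

# "Random encoder": a frozen random projection + ReLU, never trained
W = rng.normal(size=(leads * t, 256)) / np.sqrt(leads * t)

def encode(x):
    # Flatten each record and map it through the fixed random weights
    return np.maximum(x.reshape(len(x), -1) @ W, 0.0)

# Linear evaluation: fit only a linear probe on the frozen features
Z = encode(X)
Ztr, Zte = Z[:150], Z[150:]
ytr, yte = y[:150], y[150:]
lam = 1.0  # ridge penalty keeps the closed-form solve well-conditioned
beta = np.linalg.solve(Ztr.T @ Ztr + lam * np.eye(Z.shape[1]), Ztr.T @ ytr)
scores = Zte @ beta
acc = ((scores > 0.5) == yte).mean()
```

The point of such a baseline is that any pre-trained encoder claiming useful ECG representations should beat this frozen random map under the same linear probe.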
Computer Science > Machine Learning
arXiv:2602.17531 (cs) [Submitted on 19 Feb 2026]
Title: Position: Evaluation of ECG Representations Must Be Fixed
Authors: Zachary Berger, Daniel Prakah-Asante, John Guttag, Collin M. Stultz
Abstract: This position paper argues that current benchmarking practice in 12-lead ECG representation learning must be fixed to ensure progress is reliable and aligned with clinically meaningful objectives. The field has largely converged on three public multi-label benchmarks (PTB-XL, CPSC2018, CSN) dominated by arrhythmia and waveform-morphology labels, even though the ECG is known to encode substantially broader clinical information. We argue that downstream evaluation should expand to include an assessment of structural heart disease and patient-level forecasting, in addition to other evolving ECG-related endpoints, as relevant clinical targets. Next, we outline evaluation best practices for multi-label, imbalanced settings, and show that when they are applied, the literature's current conclusion about which representations perform best is altered. Furthermore, we demonstrate the surprising result that a randomly initialized encoder with linear evaluation matches state-of-the-art pre-training on many tasks. This motivates the use of a random encoder as a reasonable baseline model. We substantiate our observations w...
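The abstract's point about evaluation in multi-label, imbalanced settings can be made concrete with a minimal sketch. The rank-based AUROC below and the constant-negative classifier are illustrative assumptions, not the paper's protocol: they show how raw accuracy can look strong on a rare label while a ranking metric reveals chance-level performance.

```python
import numpy as np

def auroc(y_true, scores):
    # Rank-based AUROC: probability a positive outranks a negative,
    # counting ties as 0.5 (so an uninformative scorer lands at 0.5)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    gt = (pos[:, None] > neg[None, :]).astype(float)
    eq = (pos[:, None] == neg[None, :]).astype(float)
    return (gt + 0.5 * eq).mean()

# Imbalanced label: roughly 5% positives
rng = np.random.default_rng(0)
y = (rng.random(1000) < 0.05).astype(int)

# A useless classifier that always predicts "negative"
const_scores = np.zeros(1000)
acc = ((const_scores > 0.5) == y).mean()  # high, because negatives dominate
au = auroc(y, const_scores)               # 0.5: pure chance
```

Averaging such per-label ranking metrics (macro-averaging) rather than reporting overall accuracy is one of the standard best practices for imbalanced multi-label benchmarks.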