[2410.08559] Learning General Representation of 12-Lead Electrocardiogram with a Joint-Embedding Predictive Architecture
Computer Science > Machine Learning
arXiv:2410.08559 (cs)
[Submitted on 11 Oct 2024 (v1), last revised 10 Apr 2026 (this version, v5)]

Title: Learning General Representation of 12-Lead Electrocardiogram with a Joint-Embedding Predictive Architecture
Authors: Sehun Kim

Abstract: Electrocardiogram (ECG) captures the heart's electrical signals, offering valuable information for diagnosing cardiac conditions. However, the scarcity of labeled data makes it challenging to fully leverage supervised learning in the medical domain. Self-supervised learning (SSL) offers a promising solution, enabling models to learn from unlabeled data and uncover meaningful patterns. In this paper, we show that masked modeling in the latent space can be a powerful alternative to existing self-supervised methods in the ECG domain. We introduce ECG-JEPA, an SSL model for 12-lead ECG analysis that learns semantic representations of ECG data by predicting in the hidden latent space, bypassing the need to reconstruct raw signals. This approach offers several advantages in the ECG domain: (1) it avoids producing unnecessary details, such as noise, which is common in ECG; and (2) it addresses the limitations of naive L2 loss between raw signals. Another key contribution is the introduction of Cross-Pattern Attention (CroPA), a sp...
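The core idea the abstract describes, predicting masked content in latent space rather than reconstructing raw ECG signals, can be illustrated with a minimal sketch. This is not the authors' ECG-JEPA implementation: the linear "encoders" stand in for the paper's actual networks, and all names, dimensions, and the EMA momentum are illustrative assumptions; only the loss structure (L2 between predicted and target latents, with a gradient-free, EMA-updated target encoder) follows the JEPA recipe.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # latent dimension (assumption; the paper's encoders are far larger)

# Context and target encoders as simple linear maps (stand-ins for Transformers).
W_ctx = rng.normal(size=(DIM, DIM))
W_tgt = W_ctx.copy()      # target encoder starts as a copy; updated only by EMA
W_pred = rng.normal(size=(DIM, DIM))  # predictor head

def latent_prediction_loss(visible, masked):
    """L2 loss between predicted and target *latents*, never raw signals.

    visible: embeddings of the unmasked ECG patches (context)
    masked:  embeddings of the masked patches whose latents are predicted
    """
    ctx = visible @ W_ctx          # encode visible context
    pred = ctx @ W_pred            # predict latents of the masked region
    tgt = masked @ W_tgt           # target latents; no gradient flows here
    return float(np.mean((pred - tgt) ** 2))

def ema_update(momentum=0.99):
    """Move the target encoder toward the context encoder (stop-gradient)."""
    global W_tgt
    W_tgt = momentum * W_tgt + (1.0 - momentum) * W_ctx

# Toy batch of 4 "patches": in the real model these would be 12-lead ECG segments.
visible = rng.normal(size=(4, DIM))
masked = rng.normal(size=(4, DIM))
loss = latent_prediction_loss(visible, masked)
```

Because the loss is computed between latent vectors, the model is never asked to reproduce signal-level noise, which is the advantage the abstract claims over raw-signal L2 reconstruction.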