[2603.03331] PulseLM: A Foundation Dataset and Benchmark for PPG-Text Learning
Computer Science > Computation and Language
arXiv:2603.03331 (cs) [Submitted on 10 Feb 2026]

Title: PulseLM: A Foundation Dataset and Benchmark for PPG-Text Learning
Authors: Hung Manh Pham, Jinyang Wu, Xiao Ma, Yiming Zhang, Yixin Xu, Aaqib Saeed, Bin Zhu, Zhou Pan, Dong Ma

Abstract: Photoplethysmography (PPG) is a widely used non-invasive sensing modality for continuous cardiovascular and physiological monitoring across clinical, laboratory, and wearable settings. While existing PPG datasets support a broad range of downstream tasks, they typically provide supervision in the form of numerical measurements or task-specific labels, limiting their suitability for language-based physiological reasoning and multimodal foundation models. In this work, we introduce PulseLM, a large-scale PPG-text dataset designed to bridge raw PPG waveforms and natural language through a unified, closed-ended question answering (QA) formulation. PulseLM aggregates PPG recordings from fifteen publicly available sources and harmonizes heterogeneous annotations into twelve common physiological QA tasks. The dataset comprises 1.31 million standardized 10-second PPG segments associated with 3.15 million question-answer pairs. We further define reproducible preprocessing, supervision, and evaluation protocols and establish baseline benchm...