[2603.00312] How Well Do Multimodal Models Reason on ECG Signals?
Computer Science > Artificial Intelligence
arXiv:2603.00312 (cs)
[Submitted on 27 Feb 2026]

Title: How Well Do Multimodal Models Reason on ECG Signals?
Authors: Maxwell A. Xu, Harish Haresumadram, Catherine W. Liu, Patrick Langer, Jathurshan Pradeepkumar, Wanting Mao, Sunita J. Ferns, Aradhana Verma, Jimeng Sun, Paul Schmiedmayer, Xin Liu, Daniel McDuff, Emily B. Fox, James M. Rehg

Abstract: While multimodal large language models offer a promising solution to the "black box" nature of health AI by generating interpretable reasoning traces, verifying the validity of these traces remains a critical challenge. Existing evaluation methods are either unscalable, relying on manual clinician review, or superficial, utilizing proxy metrics (e.g., QA) that fail to capture the semantic correctness of clinical logic. In this work, we introduce a reproducible framework for evaluating reasoning on ECG signals. We propose decomposing reasoning into two distinct components: (i) Perception, the accurate identification of patterns within the raw signal, and (ii) Deduction, the logical application of domain knowledge to those patterns. To evaluate Perception, we employ an agentic framework that generates code to empirically verify the temporal structures described in the reasoning trace. To evaluate Deduction, we measure the alignment of the mo...
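To make the Perception evaluation concrete, below is a minimal sketch (not the paper's implementation) of the kind of check an agentic framework might generate: verifying a heart-rate claim from a reasoning trace against R-peaks detected in the raw ECG signal. The function name, the tolerance, and the use of scipy.signal.find_peaks are illustrative assumptions; in the paper's setting such code would be generated per claim rather than hard-coded.

```python
# Illustrative sketch (assumption, not from the paper): empirically
# verify a temporal claim ("heart rate is ~X bpm") against the signal.
import numpy as np
from scipy.signal import find_peaks

def verify_heart_rate_claim(ecg: np.ndarray, fs: float,
                            claimed_bpm: float, tol_bpm: float = 5.0) -> bool:
    """Return True if the claimed heart rate matches the ECG.

    ecg: single-lead ECG samples; fs: sampling rate in Hz.
    tol_bpm is an illustrative tolerance, not a value from the paper.
    """
    # Detect R-peaks: require a prominence relative to signal spread and
    # a ~250 ms refractory period (i.e., a 240 bpm ceiling) between beats.
    peaks, _ = find_peaks(ecg, prominence=0.5 * np.std(ecg),
                          distance=int(0.25 * fs))
    if len(peaks) < 2:
        return False  # not enough beats to estimate a rate
    rr_sec = np.diff(peaks) / fs           # RR intervals in seconds
    measured_bpm = 60.0 / np.mean(rr_sec)  # mean heart rate over the strip
    return abs(measured_bpm - claimed_bpm) <= tol_bpm
```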