[2603.27667] EvA: An Evidence-First Audio Understanding Paradigm for LALMs
Computer Science > Sound

arXiv:2603.27667 (cs)
[Submitted on 29 Mar 2026]

Title: EvA: An Evidence-First Audio Understanding Paradigm for LALMs
Authors: Xinyuan Xie, Shunian Chen, Zhiheng Liu, Yuhao Zhang, Zhiqiang Lv, Liyin Liang, Benyou Wang

Abstract: Large Audio Language Models (LALMs) still struggle in complex acoustic scenes because they often fail to preserve task-relevant acoustic evidence before reasoning begins. We call this failure the evidence bottleneck: state-of-the-art systems show larger deficits in evidence extraction than in downstream reasoning, suggesting that the main limitation lies in upstream perception rather than reasoning policy. To address this problem, we propose EvA (Evidence-First Audio), a dual-path architecture that combines Whisper and CED-Base through non-compressive, time-aligned fusion. EvA first aggregates intermediate CED layers to preserve multi-scale acoustic cues, then aligns the aggregated CED features to the Whisper timeline and adds the two streams without changing sequence length. We also build EvA-Perception, a large-scale open-source training set with about 54K event-ordered captions (150 h) and about 500K QA pairs. Under a unified zero-shot protocol, EvA achieves the best open-source Perception scores on MMAU, MMAR, and MMSU, and improves over Kimi-Audio-7B on all reported m...
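The abstract describes the fusion step only at a high level. The following is a minimal sketch, not the authors' released code, of what a non-compressive, time-aligned fusion of the two streams could look like: intermediate CED layers are aggregated with learned weights, projected to the Whisper hidden size, interpolated along time to the Whisper timeline, and added without changing the sequence length. The module name, layer-weighting scheme, projection, and linear interpolation are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidenceFusion(nn.Module):
    """Hypothetical non-compressive, time-aligned fusion of Whisper and CED streams."""

    def __init__(self, num_ced_layers: int, d_ced: int, d_whisper: int):
        super().__init__()
        # Learnable weights for aggregating intermediate CED layers (assumption).
        self.layer_weights = nn.Parameter(torch.zeros(num_ced_layers))
        # Project the aggregated CED features into the Whisper hidden size.
        self.proj = nn.Linear(d_ced, d_whisper)

    def forward(self, whisper_feats: torch.Tensor, ced_layers: list[torch.Tensor]) -> torch.Tensor:
        # whisper_feats: (B, T_w, d_whisper); ced_layers: list of (B, T_c, d_ced)

        # 1) Aggregate intermediate CED layers to keep multi-scale acoustic cues.
        w = torch.softmax(self.layer_weights, dim=0)           # (L,)
        stacked = torch.stack(ced_layers, dim=0)               # (L, B, T_c, d_ced)
        ced = (w.view(-1, 1, 1, 1) * stacked).sum(dim=0)       # (B, T_c, d_ced)

        # 2) Align the aggregated CED stream to the Whisper timeline by
        #    interpolating along the time axis to T_w (Whisper stream is not compressed).
        ced = self.proj(ced)                                   # (B, T_c, d_whisper)
        ced = F.interpolate(ced.transpose(1, 2),               # (B, d_whisper, T_c)
                            size=whisper_feats.size(1),
                            mode="linear", align_corners=False)
        ced = ced.transpose(1, 2)                               # (B, T_w, d_whisper)

        # 3) Add the two streams; the output keeps the Whisper sequence length.
        return whisper_feats + ced


# Example with made-up shapes: the fused output has the Whisper sequence length.
fusion = EvidenceFusion(num_ced_layers=4, d_ced=768, d_whisper=1280)
whisper = torch.randn(2, 1500, 1280)
ced = [torch.randn(2, 500, 768) for _ in range(4)]
out = fusion(whisper, ced)   # shape (2, 1500, 1280)
```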