[2604.09121] Interactive ASR: Towards Human-Like Interaction and Semantic Coherence Evaluation for Agentic Speech Recognition
Computer Science > Computation and Language
arXiv:2604.09121 (cs)
[Submitted on 10 Apr 2026]

Title: Interactive ASR: Towards Human-Like Interaction and Semantic Coherence Evaluation for Agentic Speech Recognition

Authors: Peng Wang (1), Yanqiao Zhu (1), Zixuan Jiang (1), Qinyuan Chen (2), Xingjian Zhao (2), Xipeng Qiu (2), Wupeng Wang (3), Zhifu Gao (3), Xiangang Li (3), Kai Yu (1), Xie Chen (1)
((1) X-LANCE Lab, Shanghai Jiao Tong University; (2) School of Computer Science, Fudan University; (3) Tongyi Fun Team, Alibaba Group)

Abstract: Recent years have witnessed remarkable progress in automatic speech recognition (ASR), driven by advances in model architectures and large-scale training data. However, two important aspects remain underexplored. First, Word Error Rate (WER), the dominant evaluation metric for decades, treats all words equally and often fails to reflect the semantic correctness of an utterance at the sentence level. Second, interactive correction, an essential component of human communication, has rarely been systematically studied in ASR research. In this paper, we integrate these two perspectives under an agentic framework for interactive ASR. We propose leveraging LLM-as-a-Judge as a semantic-aware evaluation metric to assess recognition quality beyo...
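The abstract's point that WER treats all words equally can be made concrete: under standard Levenshtein-based WER, a substitution that negates the sentence costs exactly as much as a cosmetic one. A minimal sketch (the example sentences are hypothetical, not from the paper):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table of edit distances between word prefixes.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # cost of deleting i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # cost of inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

ref = "please do not send the report"
# Both hypotheses score WER = 1/6, yet the first flips the meaning
# while the second is semantically harmless:
print(wer(ref, "please do now send the report"))   # meaning-breaking substitution
print(wer(ref, "please do not send that report"))  # cosmetic substitution
```

A semantic-aware judge, as proposed in the paper, would penalize the first hypothesis far more than the second, even though WER cannot distinguish them.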