[2509.16622] Audio-Conditioned Diffusion LLMs for ASR and Deliberation Processing
Electrical Engineering and Systems Science > Audio and Speech Processing

arXiv:2509.16622 (eess)

[Submitted on 20 Sep 2025 (v1), last revised 27 Feb 2026 (this version, v3)]

Title: Audio-Conditioned Diffusion LLMs for ASR and Deliberation Processing

Authors: Mengqi Wang, Zhan Liu, Zengrui Jin, Guangzhi Sun, Chao Zhang, Philip C. Woodland

Abstract: Diffusion-based large language models (DLLMs) have recently attracted growing interest as an alternative to autoregressive decoders. In this work, we present an empirical study of the diffusion-based large language model LLaDA for automatic speech recognition (ASR). We first investigate its use as an external deliberation-based processing module for Whisper-LLaMA transcripts. By leveraging the bidirectional attention and denoising capabilities of LLaDA, we explore random masking, low-confidence masking, and semi-autoregressive strategies, showing that Whisper-LLaDA substantially reduces WER compared with the baseline. On LibriSpeech, the best cascade system achieves 2.25%/4.94% WER on test-clean/test-other, representing a 12.3% relative improvement over the Whisper-LLaMA baseline on the test-other split. In contrast, a plain-text LLaDA without acoustic features fails to improve accuracy, highlighting the importance of audio-conditioned embeddings. We further evalua...
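
To make the low-confidence masking strategy mentioned in the abstract concrete, the sketch below shows one possible deliberation loop in Python: the least-confident positions of a first-pass ASR hypothesis are re-masked and repredicted by a masked-denoising language model. The model interface, mask_id, and all hyperparameters are illustrative assumptions, not the authors' implementation or the LLaDA API.

    # Illustrative sketch of low-confidence masking deliberation (hypothetical interface).
    # `model` is assumed to be a masked-denoising LM that maps a (1, T) id tensor to
    # (1, T, V) logits; `mask_id` is its mask-token id. Both are assumptions.
    import torch

    def low_confidence_deliberation(model, token_ids, mask_id, num_steps=4, mask_ratio=0.3):
        """Re-mask the least-confident tokens of a hypothesis and let the denoiser
        repredict them, keeping high-confidence tokens fixed across steps."""
        ids = token_ids.clone()                          # (1, T) first-pass hypothesis
        for _ in range(num_steps):
            with torch.no_grad():
                logits = model(ids)                      # (1, T, V) denoiser logits
            conf, _ = torch.softmax(logits, dim=-1).max(dim=-1)   # per-position confidence
            k = max(1, int(mask_ratio * ids.size(1)))
            low_conf = torch.topk(-conf[0], k).indices   # k least-confident positions
            masked = ids.clone()
            masked[0, low_conf] = mask_id                # re-mask only those positions
            with torch.no_grad():
                re_logits = model(masked)
            ids[0, low_conf] = re_logits[0, low_conf].argmax(dim=-1)  # fill with new predictions
        return ids

The random-masking variant would simply replace the confidence-based selection with uniformly sampled positions, and a semi-autoregressive variant would sweep the re-masking window from left to right rather than over the whole sequence at once.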