[2508.12301] WhisperRT -- Turning Whisper into a Causal Streaming Model
Computer Science > Computation and Language
arXiv:2508.12301 (cs)
[Submitted on 17 Aug 2025 (v1), last revised 5 Apr 2026 (this version, v2)]

Title: WhisperRT -- Turning Whisper into a Causal Streaming Model
Authors: Tomer Krichli, Bhiksha Raj, Joseph Keshet

Abstract: Automatic Speech Recognition (ASR) has seen remarkable progress, with models like OpenAI Whisper and NVIDIA Canary achieving state-of-the-art (SOTA) performance in offline transcription. However, these models are not designed for streaming (online or real-time) transcription, due to limitations in their architecture and training methodology. We propose a method to turn the transformer encoder-decoder model into a low-latency streaming model. The encoder is made causal to process audio incrementally, while the decoder conditions on partial encoder states to generate tokens aligned with the available temporal context. This requires explicit synchronization between encoded input frames and token emissions. Since tokens are produced only after sufficient acoustic evidence is observed, an inherent latency arises, necessitating fine-tuning of the encoder-decoder alignment mechanism. We propose an updated inference mechanism that utilizes the fine-tuned causal encoder and decoder to yield greedy and beam-search decoding, and is shown to be locally optimal. Experime...
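The core idea of making the encoder causal can be illustrated with a toy attention mask: each audio frame may attend only to frames in the same or earlier chunks, never to future context. This is a minimal NumPy sketch under assumed conventions; the function names (`causal_chunk_mask`, `masked_attention`) and the chunked-mask formulation are illustrative and not taken from the paper.

```python
import numpy as np

def causal_chunk_mask(n_frames: int, chunk: int = 1) -> np.ndarray:
    """Boolean attention mask: frame i may attend to frame j only if j lies
    in the same or an earlier chunk (chunk=1 gives strict frame-level causality)."""
    idx = np.arange(n_frames) // chunk      # chunk index of each frame
    return idx[:, None] >= idx[None, :]     # True where attention is allowed

def masked_attention(q: np.ndarray, k: np.ndarray, v: np.ndarray,
                     mask: np.ndarray) -> np.ndarray:
    """Toy scaled dot-product attention; disallowed (future) positions get -inf."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

With `chunk > 1` the mask trades a little latency for more left context per emission, mirroring the paper's observation that tokens can only be emitted once sufficient acoustic evidence has been observed.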