[2508.12301] WhisperRT -- Turning Whisper into a Causal Streaming Model


arXiv - Machine Learning 4 min read


Computer Science > Computation and Language

arXiv:2508.12301 (cs) [Submitted on 17 Aug 2025 (v1), last revised 5 Apr 2026 (this version, v2)]

Title: WhisperRT -- Turning Whisper into a Causal Streaming Model

Authors: Tomer Krichli, Bhiksha Raj, Joseph Keshet

Abstract: Automatic Speech Recognition (ASR) has seen remarkable progress, with models like OpenAI Whisper and NVIDIA Canary achieving state-of-the-art (SOTA) performance in offline transcription. However, these models are not designed for streaming (online or real-time) transcription, due to limitations in their architecture and training methodology. We propose a method to turn the transformer encoder-decoder model into a low-latency streaming model. The encoder is made causal to process audio incrementally, while the decoder conditions on partial encoder states to generate tokens aligned with the available temporal context. This requires explicit synchronization between encoded input frames and token emissions. Since tokens are produced only after sufficient acoustic evidence is observed, an inherent latency arises, necessitating fine-tuning of the encoder-decoder alignment mechanism. We propose an updated inference mechanism that utilizes the fine-tuned causal encoder and decoder to yield greedy and beam-search decoding, and is shown to be locally optimal. Experime...
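The two mechanisms the abstract describes can be illustrated with a minimal sketch: a causal (lower-triangular) attention mask, so that encoder frame t attends only to frames up to t, and a toy frame-to-token alignment rule in which a token is emitted only after a fixed number of new frames has accumulated. This is not the paper's actual alignment mechanism (which is fine-tuned, not fixed); the `frames_per_token` parameter and both function names are illustrative assumptions.

```python
import numpy as np

def causal_mask(n_frames: int) -> np.ndarray:
    """Lower-triangular boolean mask: frame t may attend only to
    frames <= t, making the encoder processable incrementally."""
    return np.tril(np.ones((n_frames, n_frames), dtype=bool))

def streaming_emit_points(n_frames: int, frames_per_token: int) -> list:
    """Toy alignment rule (an assumption, not the paper's method):
    emit one token after every `frames_per_token` new encoder frames,
    modeling the inherent latency between audio and token emission."""
    return [t for t in range(1, n_frames + 1) if t % frames_per_token == 0]

mask = causal_mask(4)
# Row t of `mask` marks the frames visible to the encoder at step t;
# entries above the diagonal are False, so no future audio is used.
emits = streaming_emit_points(10, 4)  # tokens emitted after frames 4 and 8
```

With a mask like this, the decoder at each emission point conditions only on the partial encoder states available so far, which is the synchronization the abstract refers to.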

Originally published on April 07, 2026. Curated by AI News.
