[2603.05210] Balancing Coverage and Draft Latency in Vocabulary Trimming for Faster Speculative Decoding


arXiv - Machine Learning 4 min read

About this article


Computer Science > Computation and Language
arXiv:2603.05210 (cs) · Submitted on 5 Mar 2026
Title: Balancing Coverage and Draft Latency in Vocabulary Trimming for Faster Speculative Decoding
Authors: Ofir Ben Shoham

Abstract: Speculative decoding accelerates inference for Large Language Models by using a lightweight draft model to propose candidate tokens that are verified in parallel by a larger target model. Prior work shows that the draft model often dominates speculative decoding latency, since it generates tokens sequentially and incurs high cost from its language modeling head as vocabulary size grows. This exposes a fundamental trade-off in draft model design: larger vocabularies improve token coverage and agreement with the target model but incur higher draft latency, while smaller vocabularies reduce latency at the risk of missing tokens required for accurate draft generation. We address this trade-off through vocabulary trimming for draft models, motivated by the observation that domain-specific workloads use only a small fraction of the full vocabulary. We cast draft vocabulary selection as a constrained optimization problem that balances token coverage and draft latency. Coverage is computed over assistant responses in the training data, while latency is estimated using architecture-aware FLOPs...
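The trade-off the abstract describes can be sketched in a few lines: rank tokens by frequency in domain data, keep the smallest prefix that meets a coverage target, and compare LM-head cost before and after trimming. This is a minimal illustration, not the paper's method; the function names and the simple 2·d·V FLOPs model for the LM head are assumptions for the sketch.

```python
# Hypothetical sketch of coverage-constrained vocabulary trimming.
# Assumptions (not from the paper): greedy frequency ranking and a
# matrix-vector FLOPs model of ~2 * hidden_dim * vocab_size per token.
from collections import Counter

def trim_vocab(token_ids, coverage_target=0.99):
    """Smallest frequency-ranked token set covering `coverage_target`
    of all token occurrences in the corpus."""
    counts = Counter(token_ids)
    total = sum(counts.values())
    kept, covered = [], 0
    for tok, count in counts.most_common():
        kept.append(tok)
        covered += count
        if covered / total >= coverage_target:
            break
    return kept, covered / total

def lm_head_flops(hidden_dim, vocab_size):
    # One LM-head forward per token is a (hidden_dim x vocab_size)
    # matrix-vector product: roughly 2 * hidden_dim * vocab_size FLOPs.
    return 2 * hidden_dim * vocab_size

# Toy token-id stream standing in for assistant responses.
corpus = [1, 1, 2, 3, 1, 2, 4, 1, 2, 3]
kept, coverage = trim_vocab(corpus, coverage_target=0.9)
print(len(kept), coverage)

full = lm_head_flops(hidden_dim=4096, vocab_size=128_000)
trimmed = lm_head_flops(hidden_dim=4096, vocab_size=len(kept))
print(full / trimmed)
```

Raising `coverage_target` grows the kept set (lower risk of missing draft tokens) while shrinking the FLOPs reduction, which is exactly the constrained optimization the abstract poses.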

Originally published on March 06, 2026. Curated by AI News.

Related Articles


An attack class that passes every current LLM filter - no payload, no injection signature, no log trace

https://shapingrooms.com/research I published a paper today on something I've been calling postural manipulation. The short version: ordi...

Reddit - Artificial Intelligence · 1 min ·

[R] An attack class that passes every current LLM filter - no payload, no injection signature, no log trace

https://shapingrooms.com/research I've been documenting what I'm calling postural manipulation: a specific class of language that install...

Reddit - Machine Learning · 1 min ·

There are more AI health tools than ever—but how well do they work? | MIT Technology Review

Earlier this month, Microsoft launched Copilot Health, a new space within its Copilot app where users will be able to connect their medic...

MIT Technology Review · 11 min ·

What does Gemini think of you?

I noticed that Gemini was referring back to a lot of queries I've made in the past and was using that knowledge to drive follow up prompt...

Reddit - Artificial Intelligence · 1 min ·