[2602.16052] MoE-Spec: Expert Budgeting for Efficient Speculative Decoding
Summary
The paper introduces MoE-Spec, a method that improves the efficiency of speculative decoding for Mixture-of-Experts Large Language Models (LLMs) by budgeting which experts are loaded during verification, yielding significant throughput gains.
Why It Matters
As LLMs become increasingly integral to various applications, optimizing their performance is crucial. MoE-Spec addresses memory bottlenecks in Mixture-of-Experts models, enhancing throughput and enabling more efficient use of computational resources, which is vital for both research and practical applications in AI.
Key Takeaways
- MoE-Spec improves speculative decoding efficiency by optimizing expert usage.
- The method yields 10-30% higher throughput than state-of-the-art speculative decoding baselines (EAGLE-3) at comparable quality.
- Tighter expert budgets allow accuracy to be traded for further latency reductions, adding flexibility in deployment.
Computer Science > Machine Learning
arXiv:2602.16052 (cs) [Submitted on 17 Feb 2026]
Title: MoE-Spec: Expert Budgeting for Efficient Speculative Decoding
Authors: Bradley McDanel, Steven Li, Sruthikesh Surineni, Harshit Khaitan
Abstract: Speculative decoding accelerates Large Language Model (LLM) inference by verifying multiple drafted tokens in parallel. However, for Mixture-of-Experts (MoE) models, this parallelism introduces a severe bottleneck: large draft trees activate many unique experts, significantly increasing memory pressure and diminishing speedups from speculative decoding relative to autoregressive decoding. Prior methods reduce speculation depth when MoE verification becomes expensive. We propose MoE-Spec, a training-free verification-time expert budgeting method that decouples speculation depth from memory cost by enforcing a fixed expert capacity limit at each layer, loading only the experts that contribute most to verification and dropping the long tail of rarely used experts that drive bandwidth overhead. Experiments across multiple model scales and datasets show that this method yields 10-30% higher throughput than state-of-the-art speculative decoding baselines (EAGLE-3) at comparable quality, with flexibility to trade accuracy for further latency reductions through tighter budgets.
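The per-layer budgeting idea in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the function name `budget_experts`, the routing format, and the score aggregation are assumptions. It aggregates router scores over all drafted tokens at one MoE layer and keeps only the top-B experts, dropping the long tail of rarely used experts.

```python
# Hypothetical sketch of verification-time expert budgeting (illustrative
# only; the paper's actual selection criterion may differ).
from collections import Counter

def budget_experts(routing, budget):
    """routing: list of (token_id, expert_id, score) tuples for one MoE
    layer across all tokens in the draft tree.
    Returns the set of expert ids to load under the capacity `budget`."""
    totals = Counter()
    for _token, expert, score in routing:
        # Accumulate each expert's total contribution to verification.
        totals[expert] += score
    # Keep only the top-`budget` experts; the long tail is dropped.
    return {expert for expert, _ in totals.most_common(budget)}

# Example: a draft tree whose tokens route to 4 unique experts.
# With budget=2, only the two highest-scoring experts are loaded,
# so memory cost stays fixed regardless of speculation depth.
routing = [(0, 3, 0.9), (1, 3, 0.8), (1, 7, 0.4),
           (2, 5, 0.3), (3, 1, 0.1)]
print(sorted(budget_experts(routing, budget=2)))  # → [3, 7]
```

Shrinking `budget` reduces expert loads (and thus memory bandwidth) at the cost of dropping tokens routed to unloaded experts, which is the accuracy-for-latency trade-off the paper describes.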