[2603.19289] Speculating Experts Accelerates Inference for Mixture-of-Experts
Computer Science > Machine Learning

arXiv:2603.19289 (cs) [Submitted on 9 Mar 2026]

Title: Speculating Experts Accelerates Inference for Mixture-of-Experts

Authors: Vivan Madan, Prajwal Singhania, Abhinav Bhatele, Tom Goldstein, Ashwinee Panda

Abstract: Mixture-of-Experts (MoE) models have gained popularity as a means of scaling the capacity of large language models (LLMs) while maintaining sparse activations and reduced per-token compute. However, in memory-constrained inference settings, expert weights must be offloaded to CPU, creating a performance bottleneck from CPU-GPU transfers during decoding. We propose an expert prefetching scheme that leverages currently computed internal model representations to speculate future experts, enabling memory transfers to overlap with computation. Across multiple MoE architectures, we demonstrate that future experts can be reliably predicted from these internal representations. We also demonstrate that executing the speculated experts generally maintains downstream task accuracy, preserving effective compute-memory overlap by eliminating the need to re-fetch the true router-selected experts. Integrated into an optimized inference engine, our approach achieves up to a 14% reduction in time per output token (TPOT) over on-demand loading of experts from CPU memory. For MoEs where s...
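The prefetching scheme the abstract describes can be sketched as follows. This is a minimal illustrative mock-up, not the paper's implementation: the predictor (here a simple linear map over the current hidden state), the expert shapes, and the thread-based "CPU-to-GPU copy" are all assumptions introduced for illustration. The key idea shown is that expert speculation lets the transfer run concurrently with the current layer's compute, and that the speculated experts are then executed directly rather than re-fetching the router's true selection.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)
HIDDEN, N_EXPERTS, TOP_K = 16, 8, 2

# Hypothetical per-layer state: a linear predictor for the NEXT layer's
# router, plus CPU-resident (offloaded) expert weights.
predictor_next = rng.standard_normal((HIDDEN, N_EXPERTS))
cpu_experts = {e: rng.standard_normal((HIDDEN, HIDDEN)) for e in range(N_EXPERTS)}
gpu_cache = {}  # stands in for GPU-resident expert weights

def speculate_experts(hidden, k=TOP_K):
    """Predict the next layer's top-k experts from the current hidden state."""
    logits = hidden @ predictor_next
    return np.argsort(logits)[-k:][::-1]

def prefetch(expert_ids):
    """Simulated CPU->GPU transfer; runs concurrently with layer compute."""
    for e in expert_ids:
        gpu_cache[e] = cpu_experts[e].copy()

hidden = rng.standard_normal(HIDDEN)
predicted = speculate_experts(hidden)
with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(prefetch, predicted)  # overlap transfer with compute
    # Current-layer work proceeds while the transfer is in flight.
    hidden = np.tanh(hidden @ rng.standard_normal((HIDDEN, HIDDEN)))
    fut.result()

# Speculated experts are now resident; execute them directly instead of
# re-fetching the true router-selected experts.
out = sum(hidden @ gpu_cache[e] for e in predicted) / len(predicted)
print(sorted(gpu_cache) == sorted(predicted.tolist()))
```

In a real engine the prefetch would be an asynchronous host-to-device copy on a separate stream; the thread pool here only mimics that overlap on CPU.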