[2602.19938] A Replicate-and-Quantize Strategy for Plug-and-Play Load Balancing of Sparse Mixture-of-Experts LLMs
Summary
The paper presents a Replicate-and-Quantize strategy for improving load balancing in Sparse Mixture-of-Experts (SMoE) models, enhancing inference efficiency without retraining.
Why It Matters
As large language models (LLMs) become more prevalent, efficient load balancing during inference is crucial for performance and resource utilization. This research targets inference-time inefficiencies in SMoE architectures, offering a plug-and-play approach that rebalances expert workloads without retraining.
Key Takeaways
- Load imbalance in SMoE models worsens with larger batch sizes.
- The frequency with which an expert is selected does not reliably reflect its importance.
- A small calibration set can estimate expert workload effectively.
- The proposed R&Q framework allows for dynamic workload rebalancing without retraining.
- Experiments show a 1.4x reduction in imbalance while maintaining accuracy.
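The first three takeaways can be illustrated with a small sketch: estimate per-expert workload from router logits on a calibration batch, then compute a simple imbalance score. The max-over-mean metric, the top-2 routing, and the synthetic data below are all illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def expert_load_imbalance(routing_counts):
    """Imbalance as max load over mean load; 1.0 means perfectly balanced.

    (Illustrative metric; the paper's exact imbalance measure may differ.)
    """
    counts = np.asarray(routing_counts, dtype=float)
    return counts.max() / counts.mean()

def estimate_loads(router_logits, top_k=2):
    """Estimate per-expert workload from a calibration batch of router logits.

    router_logits: (num_tokens, num_experts) array.
    Returns how many calibration tokens each expert would receive under
    top-k routing.
    """
    # Indices of the top-k experts per token (argsort is ascending,
    # so the last k columns are the highest-scoring experts).
    topk = np.argsort(router_logits, axis=-1)[:, -top_k:]
    num_experts = router_logits.shape[-1]
    return np.bincount(topk.ravel(), minlength=num_experts)

# Tiny synthetic calibration set: 1000 tokens, 8 experts, skewed router.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 8))
logits[:, 0] += 2.0  # expert 0 is systematically over-selected
loads = estimate_loads(logits, top_k=2)
print(loads, round(expert_load_imbalance(loads), 2))
```

With a skewed router, expert 0 absorbs far more than its fair 1/8 share of the 2000 token-expert assignments, and the imbalance score is well above 1.0, mirroring the kind of skew the paper reports at inference time.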
Computer Science > Machine Learning
arXiv:2602.19938 [cs] (Submitted on 23 Feb 2026)
Title: A Replicate-and-Quantize Strategy for Plug-and-Play Load Balancing of Sparse Mixture-of-Experts LLMs
Authors: Zijie Liu, Jie Peng, Jinhao Duan, Zirui Liu, Kaixiong Zhou, Mingfu Liang, Luke Simon, Xi Liu, Zhaozhuo Xu, Tianlong Chen
Abstract: Sparse Mixture-of-Experts (SMoE) architectures are increasingly used to scale large language models efficiently, delivering strong accuracy under fixed compute budgets. However, SMoE models often suffer from severe load imbalance across experts, where a small subset of experts receives most tokens while others are underutilized. Prior work has focused mainly on training-time solutions such as routing regularization or auxiliary losses, leaving inference-time behavior, which is critical for deployment, less explored. We present a systematic analysis of expert routing during inference and identify three findings: (i) load imbalance persists and worsens with larger batch sizes, (ii) selection frequency does not reliably reflect expert importance, and (iii) overall expert workload and importance can be estimated using a small calibration set. These insights motivate inference-time mechanisms that rebalance workloads without retraining or router modification. We ...
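The abstract is truncated before it describes the mechanism itself, so the following is only a hypothetical reading of the "replicate" half of the method's name: give overloaded experts extra copies (which could be quantized to limit the memory cost) so their traffic is split across replicas. The greedy allocation below is an assumption for illustration, not the paper's algorithm.

```python
def rebalance_by_replication(loads, num_replicas):
    """Greedy sketch: assign each extra replica to the expert whose
    per-copy load is currently highest, splitting its traffic evenly
    among its copies.

    In a replicate-and-quantize scheme, the extra copies could be stored
    in lower precision to offset the added memory (hypothetical detail).
    Returns the number of copies kept for each expert.
    """
    copies = [1] * len(loads)
    for _ in range(num_replicas):
        # Effective load handled by each copy of each expert.
        per_copy = [load / c for load, c in zip(loads, copies)]
        hottest = per_copy.index(max(per_copy))
        copies[hottest] += 1
    return copies

# A hot expert (800 tokens) next to two cold ones (150 and 50 tokens):
# both extra replicas go to expert 0, cutting its per-copy load from
# 800 to about 267.
print(rebalance_by_replication([800, 150, 50], num_replicas=2))
```

The design choice this illustrates is that rebalancing happens purely at deployment time, from measured loads, with no router change or retraining, matching the plug-and-play framing of the title.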