[2603.00054] Expert Divergence Learning for MoE-based Language Models
Computer Science > Machine Learning
arXiv:2603.00054 (cs)
[Submitted on 10 Feb 2026]

Title: Expert Divergence Learning for MoE-based Language Models
Authors: Jiaang Li, Haibin Chen, Langming Liu, Yujin Yuan, Yadao Wang, Yizhen Zhang, Chengting Yu, Xin Tong, Weidong Zhang, Shilei Liu, Wenbo Su, Bo Zheng

Abstract: The Mixture-of-Experts (MoE) architecture is a powerful technique for scaling language models, yet it often suffers from expert homogenization, where experts learn redundant functionalities, thereby limiting MoE's full potential. To address this, we introduce Expert Divergence Learning, a novel pre-training strategy that explicitly encourages functional specialization among experts. Our method incorporates a label-driven auxiliary loss that leverages domain labels inherent in pre-training corpora to maximize the Jensen-Shannon Divergence between the expert routing distributions of different data domains. This optimization objective guides the model to develop diverged routing policies for varied domains and closer routing policies for the same domain, which leads to emergent and organized expert specialization. We validate our approach by pre-training MoE models of up to 15 billion parameters from scratch. Experimental results demonstrate that models trained with Expert Divergence Learning not only achieve a lower lang...
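The abstract does not give the exact form of the auxiliary loss, but the core idea — maximizing Jensen-Shannon Divergence between per-domain expert routing distributions — can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the function names (`jensen_shannon_divergence`, `divergence_aux_loss`), the pairwise averaging over domains, and the sign convention (minimizing the negative JSD so that gradient descent pushes cross-domain routing apart) are all assumptions.

```python
import numpy as np


def jensen_shannon_divergence(p, q, eps=1e-12):
    """JSD between two categorical distributions over experts.

    Symmetric, bounded by ln(2); zero iff p == q.
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)


def divergence_aux_loss(routing_by_domain):
    """Hypothetical label-driven auxiliary loss.

    routing_by_domain maps a domain label to the mean routing
    distribution over experts for tokens of that domain. The loss is
    the negative mean pairwise JSD across distinct domains, so
    minimizing it maximizes cross-domain routing divergence.
    """
    domains = list(routing_by_domain)
    total, pairs = 0.0, 0
    for i in range(len(domains)):
        for j in range(i + 1, len(domains)):
            total -= jensen_shannon_divergence(
                routing_by_domain[domains[i]],
                routing_by_domain[domains[j]],
            )
            pairs += 1
    return total / max(pairs, 1)
```

For example, with four experts and two domains whose routing is already specialized, `divergence_aux_loss({"code": [0.7, 0.1, 0.1, 0.1], "news": [0.1, 0.7, 0.1, 0.1]})` is more negative (lower loss) than when both domains route identically, matching the stated objective of diverged routing policies for varied domains.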