[2604.00801] Routing-Free Mixture-of-Experts
Computer Science > Machine Learning
arXiv:2604.00801 (cs) [Submitted on 1 Apr 2026]

Title: Routing-Free Mixture-of-Experts
Authors: Yilun Liu, Jinru Han, Sikuan Yan, Volker Tresp, Yunpu Ma

Abstract: Standard Mixture-of-Experts (MoE) models rely on centralized routing mechanisms that introduce rigid inductive biases. We propose Routing-Free MoE, which eliminates hard-coded centralized designs, including external routers, Softmax, Top-K selection, and load balancing, by encapsulating all activation functionality within the individual experts and optimizing it directly through continuous gradient flow, so that each expert determines its activation entirely on its own. We introduce a unified adaptive load-balancing framework that simultaneously optimizes expert-balancing and token-balancing objectives through a configurable interpolation, allowing flexible and customizable resource allocation. Extensive experiments show that Routing-Free MoE consistently outperforms baselines with better scalability and robustness. We analyze its behavior in detail and offer insights that may facilitate future MoE design and optimization.

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Cite as: arXiv:2604.00801 [cs.LG] (or arXiv:2604.00801v1 [cs.LG] for this version) https://doi.org/10.48550/arXiv.2604.00801 ...
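The abstract describes the mechanism only at a high level, so the PyTorch snippet below is a minimal sketch of how a router-free, per-expert gate and an interpolated balancing loss could look. The class names (RoutingFreeExpert, RoutingFreeMoELayer), the sigmoid gate, the variance-based balancing terms, and the lam interpolation knob are assumptions made here for illustration and are not taken from the paper.

# Minimal sketch of the routing-free idea described in the abstract.
# This is an illustrative reconstruction, not the authors' implementation:
# the per-expert sigmoid gate and the variance-based balancing terms are
# assumptions made for concreteness.
import torch
import torch.nn as nn


class RoutingFreeExpert(nn.Module):
    """An expert that decides its own activation: no external router."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.gate = nn.Linear(d_model, 1)   # expert-internal activation score
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model)
        )

    def forward(self, x):                    # x: (tokens, d_model)
        a = torch.sigmoid(self.gate(x))      # continuous, differentiable gate in (0, 1)
        return a * self.ffn(x), a            # gated output and activation weight


class RoutingFreeMoELayer(nn.Module):
    """Sums expert outputs weighted by each expert's own activation.

    No Softmax over experts and no Top-K selection; gradients flow through
    every gate, so activation patterns are learned end to end.
    """

    def __init__(self, d_model: int, d_hidden: int, n_experts: int):
        super().__init__()
        self.experts = nn.ModuleList(
            [RoutingFreeExpert(d_model, d_hidden) for _ in range(n_experts)]
        )

    def forward(self, x):
        outs, acts = zip(*(expert(x) for expert in self.experts))
        activations = torch.cat(acts, dim=-1)  # (tokens, n_experts)
        return sum(outs), activations


def adaptive_balance_loss(activations, lam: float = 0.5):
    """Hypothetical interpolation between two balancing objectives.

    expert-balancing: penalize variance of per-expert average activation;
    token-balancing: penalize variance of per-token total activation.
    `lam` interpolates between the two (an assumed formulation).
    """
    expert_term = activations.mean(dim=0).var()   # spread of load across experts
    token_term = activations.mean(dim=1).var()    # spread of compute across tokens
    return lam * expert_term + (1.0 - lam) * token_term

The point mirrored here is that each expert's activation decision is local and differentiable rather than assigned by a centralized router, and the single lam parameter stands in for the configurable interpolation between expert-balancing and token-balancing described in the abstract.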