[2505.24205] On the Expressive Power of Mixture-of-Experts for Structured Complex Tasks
Summary
This paper studies the expressive power of Mixture-of-Experts networks (MoEs) for modeling complex tasks, showing that they can efficiently approximate functions supported on low-dimensional manifolds and represent structured tasks with compositional sparsity.
Why It Matters
Understanding the theoretical foundations of Mixture-of-Experts networks is crucial as they are increasingly used in deep learning. This research provides insights into their architectural components and hyperparameters, which can guide future developments in machine learning models and applications.
Key Takeaways
- MoEs can efficiently approximate functions on low-dimensional manifolds, addressing the curse of dimensionality.
- Deep MoEs can represent an exponential number of structured tasks through compositional sparsity.
- Critical components like gating mechanisms and the number of experts significantly influence MoE performance.
- The study offers natural suggestions for MoE variants based on architectural analysis.
- This research enhances the theoretical understanding of MoEs, paving the way for improved applications in AI.
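The gating mechanism and expert networks mentioned in the takeaways can be illustrated with a minimal sketch of a single MoE layer. This is an illustrative assumption, not the paper's exact construction: linear experts, a softmax gate, and top-k routing, with all names and shapes chosen for the example.

```python
import numpy as np

def moe_layer(x, gate_W, expert_Ws, k=1):
    """Minimal sketch of one Mixture-of-Experts layer with top-k gating.

    Illustrative only (not the paper's construction):
    x: (d,) input; gate_W: (E, d) gating weights;
    expert_Ws: list of E (d_out, d) linear expert weight matrices.
    """
    logits = gate_W @ x                       # one score per expert
    probs = np.exp(logits - logits.max())     # softmax over experts
    probs /= probs.sum()
    topk = np.argsort(probs)[-k:]             # indices of the k highest-gate experts
    gates = probs[topk] / probs[topk].sum()   # renormalize the selected gates
    # combine only the selected experts' outputs, weighted by their gates
    return sum(g * (expert_Ws[i] @ x) for g, i in zip(gates, topk))
```

With k=1 the layer computes exactly one expert's output per input, which is the sparse-activation property that makes MoEs efficient: compute scales with k, not with the total number of experts E.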
Computer Science > Machine Learning — arXiv:2505.24205 (cs)
[Submitted on 30 May 2025 (v1), last revised 18 Feb 2026 (this version, v2)]
Authors: Mingze Wang, Weinan E
Abstract: Mixture-of-experts networks (MoEs) have demonstrated remarkable efficiency in modern deep learning. Despite their empirical success, the theoretical foundations underlying their ability to model complex tasks remain poorly understood. In this work, we conduct a systematic study of the expressive power of MoEs in modeling complex tasks with two common structural priors: low-dimensionality and sparsity. For shallow MoEs, we prove that they can efficiently approximate functions supported on low-dimensional manifolds, overcoming the curse of dimensionality. For deep MoEs, we show that $\mathcal{O}(L)$-layer MoEs with $E$ experts per layer can approximate piecewise functions comprising $E^L$ pieces with compositional sparsity, i.e., they can exhibit an exponential number of structured tasks. Our analysis reveals the roles of critical architectural components and hyperparameters in MoEs, including the gating mechanism, expert networks, the number of experts, and the number of layers, and offers natural suggestions for MoE variants.
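The exponential counting in the abstract ($E^L$ pieces from $L$ layers of $E$ experts) follows from enumerating routing paths: each input region is handled by one expert choice per layer, and independent choices compose multiplicatively. A small sketch of this counting, with E and L chosen arbitrarily for the example:

```python
from itertools import product

# With E experts per layer and L layers, each routing path is one expert
# choice per layer; distinct paths can realize distinct function pieces.
E, L = 4, 3
paths = list(product(range(E), repeat=L))  # all expert compositions
assert len(paths) == E ** L                # 4**3 = 64 distinct paths
```

So depth buys an exponential number of expert compositions from only E * L = 12 experts in total, which is the efficiency gain the deep-MoE result formalizes.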