[2508.18672] Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks
Computer Science > Machine Learning

arXiv:2508.18672 (cs)

[Submitted on 26 Aug 2025 (v1), last revised 1 Mar 2026 (this version, v3)]

Title: Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks

Authors: Taishi Nakamura, Satoki Ishikawa, Masaki Kawamura, Takumi Okamoto, Daisuke Nohara, Jun Suzuki, Rio Yokota

Abstract: Empirical scaling laws have driven the evolution of large language models (LLMs), yet their coefficients shift whenever the model architecture or data pipeline changes. Mixture-of-Experts (MoE) models, now standard in state-of-the-art systems, introduce a new sparsity dimension that current dense-model frontiers overlook. We investigate how MoE sparsity influences two distinct capability regimes: memorization skills and reasoning skills. By training MoE families that vary total parameters, active parameters, and top-$k$ routing under fixed compute budgets, we disentangle pre-training loss from downstream accuracy. Our results reveal two principles. First, Active FLOPs: models with identical training loss but greater active compute achieve higher reasoning accuracy. Second, Total tokens per parameter (TPP): memorization tasks improve with more parameters, while reasoning tasks benefit...
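To make the quantities named in the abstract concrete, the sketch below (not the paper's code) shows how total parameters, active parameters, top-$k$ routing, sparsity, and tokens per parameter (TPP) relate under a fixed training-compute budget. The `MoEConfig` fields, the common ~6·N·D FLOPs-per-token approximation, and all example numbers are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (assumptions, not the paper's setup) of MoE sparsity bookkeeping:
# total vs. active parameters, sparsity, and TPP under a fixed FLOP budget,
# using the common ~6 * active_params FLOPs-per-token approximation.

from dataclasses import dataclass


@dataclass
class MoEConfig:
    dense_params: float   # parameters outside the expert FFNs (attention, embeddings, ...)
    expert_params: float  # parameters of one expert FFN, summed over all layers
    num_experts: int      # total experts per MoE layer
    top_k: int            # experts the router activates per token


def total_params(cfg: MoEConfig) -> float:
    """Total parameter count: dense part plus every expert."""
    return cfg.dense_params + cfg.num_experts * cfg.expert_params


def active_params(cfg: MoEConfig) -> float:
    """Parameters touched per token: dense part plus only the routed top-k experts."""
    return cfg.dense_params + cfg.top_k * cfg.expert_params


def sparsity(cfg: MoEConfig) -> float:
    """Fraction of total parameters left inactive for a given token."""
    return 1.0 - active_params(cfg) / total_params(cfg)


def tokens_for_budget(cfg: MoEConfig, train_flops: float) -> float:
    """Training tokens affordable under a fixed budget, assuming ~6 * active_params FLOPs/token."""
    return train_flops / (6.0 * active_params(cfg))


def tokens_per_param(cfg: MoEConfig, train_flops: float) -> float:
    """TPP: training tokens divided by total parameters."""
    return tokens_for_budget(cfg, train_flops) / total_params(cfg)


if __name__ == "__main__":
    # Illustrative configuration: 1B dense params, 64 experts of 0.5B each,
    # top-2 routing, and a 1e22 FLOP budget (all numbers made up for the example).
    cfg = MoEConfig(dense_params=1e9, expert_params=0.5e9, num_experts=64, top_k=2)
    budget = 1e22
    print(f"total params : {total_params(cfg):.3e}")
    print(f"active params: {active_params(cfg):.3e}")
    print(f"sparsity     : {sparsity(cfg):.3f}")
    print(f"TPP          : {tokens_per_param(cfg, budget):.2f}")
```

Under this accounting, raising `num_experts` at fixed `top_k` grows total parameters (and lowers TPP at a fixed budget) without changing active compute, while raising `top_k` grows active parameters and therefore the per-token FLOPs, which is the trade-off the abstract's two principles speak to.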