[2604.02715] FluxMoE: Decoupling Expert Residency for High-Performance MoE Serving
Computer Science > Machine Learning
arXiv:2604.02715 (cs)
[Submitted on 3 Apr 2026]

Title: FluxMoE: Decoupling Expert Residency for High-Performance MoE Serving
Authors: Qingxiu Liu, Cyril Y. He, Hanser Jiang, Zion Wang, Alan Zhao, Patrick P. C. Lee

Abstract: Mixture-of-Experts (MoE) models have become a dominant paradigm for scaling large language models, but their rapidly growing parameter sizes introduce a fundamental inefficiency during inference: most expert weights remain idle in GPU memory while competing with performance-critical runtime state such as the key-value (KV) cache. Since KV cache capacity directly determines serving throughput, this mismatch leads to underutilized memory and degraded performance. In this paper, we present FluxMoE, a new MoE inference system that decouples expert parameters from persistent GPU residency. FluxMoE introduces an expert paging abstraction that treats expert weights as streamed, transient resources, materializing them on demand and evicting them immediately after use, allowing GPU memory to be preferentially allocated to throughput-critical runtime state. We implement FluxMoE atop vLLM to enable efficient MoE inference under severe memory constraints. Experimental results demonstrate that FluxMoE achieves up to 3.0$\times$ throughput gains over vLLM in memory-intens...
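The expert-paging idea in the abstract can be illustrated with a minimal sketch. All names below (`ExpertPager`, `materialize`, `run_layer`, the residency budget) are hypothetical and chosen for illustration only; the paper's actual API and vLLM integration are not described in this abstract. The sketch models the stated behavior: expert weights live off-GPU, are materialized on demand for a forward pass, and are evicted immediately after use, so persistent GPU memory stays free for the KV cache.

```python
# Illustrative sketch, not FluxMoE's real implementation: expert weights are
# treated as streamed, transient resources rather than persistent GPU state.

class ExpertPager:
    def __init__(self, host_experts, gpu_budget):
        self.host_experts = host_experts  # expert_id -> weights kept off-GPU
        self.gpu_budget = gpu_budget      # max experts transiently resident
        self.resident = {}                # simulated on-GPU copies

    def materialize(self, expert_id):
        """Stream an expert's weights in on demand (simulated host->GPU copy)."""
        if expert_id not in self.resident:
            if len(self.resident) >= self.gpu_budget:
                # Expert weights are read-only, so eviction is just dropping
                # the transient copy; no write-back is needed.
                self.resident.pop(next(iter(self.resident)))
            self.resident[expert_id] = list(self.host_experts[expert_id])
        return self.resident[expert_id]

    def run_layer(self, routed_expert_ids, x):
        """Apply only the routed experts, evicting each right after use."""
        out = 0.0
        for eid in routed_expert_ids:
            weights = self.materialize(eid)
            out += sum(w * x for w in weights)  # stand-in for the expert FFN
            self.resident.pop(eid, None)        # evict immediately after use
        return out


pager = ExpertPager({0: [1.0], 1: [2.0], 2: [3.0]}, gpu_budget=1)
out = pager.run_layer([0, 2], x=2.0)  # only routed experts are ever resident
print(out, len(pager.resident))
```

After `run_layer` returns, no expert remains resident, which is the property the abstract highlights: GPU memory is preferentially allocated to throughput-critical runtime state rather than idle expert weights.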