[2510.05497] Patterns behind Chaos: Forecasting Data Movement for Efficient Large-Scale MoE LLM Inference
Computer Science > Distributed, Parallel, and Cluster Computing

arXiv:2510.05497 (cs) [Submitted on 7 Oct 2025 (v1), last revised 2 Apr 2026 (this version, v4)]

Title: Patterns behind Chaos: Forecasting Data Movement for Efficient Large-Scale MoE LLM Inference

Authors: Zhongkai Yu, Yue Guan, Zihao Yu, Chenyang Zhou, Zhengding Hu, Shuyi Pei, Yangwook Kang, Yufei Ding, Po-An Tsai

Abstract: Large-scale Mixture of Experts (MoE) Large Language Models (LLMs) have recently become the frontier of open-weight models, achieving remarkable capabilities comparable to proprietary ones. However, their seemingly random expert selection mechanism introduces significant data movement overhead, which becomes the dominant bottleneck in multi-unit LLM serving systems. To understand the patterns underlying this data movement, we conduct comprehensive data-movement-centric profiling across four state-of-the-art large-scale MoE models released in 2025 (200B-1000B parameters) using over 24,000 requests spanning diverse workloads. We perform systematic analysis from both temporal and spatial perspectives and distill six key insights to guide the design of diverse serving systems. We verify these insights on both future wafer-scale GPU architectures and existing GPU systems. On wafer-scale GPUs, lightweight architectural modifications guided by ...
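To make the data-movement problem in the abstract concrete, the following minimal sketch shows how per-token top-k expert routing in an MoE layer forces activations to cross device boundaries when experts are sharded across accelerators. All names, counts, and the random stand-in for the learned gating network are illustrative assumptions, not details from the paper.

```python
import random

NUM_EXPERTS = 16   # total experts in one MoE layer, sharded across devices
NUM_DEVICES = 4    # experts 0-3 on device 0, experts 4-7 on device 1, ...
TOP_K = 2          # each token is routed to its top-k experts

def route_token(token_id):
    """Stand-in for a learned gating network: pick top-k experts per token.

    Real gates are data-dependent; a seeded random choice mimics their
    hard-to-predict (seemingly chaotic) selection pattern.
    """
    return random.Random(token_id).sample(range(NUM_EXPERTS), TOP_K)

def expert_device(expert_id):
    """Map an expert to the device that hosts its weights (block sharding)."""
    return expert_id * NUM_DEVICES // NUM_EXPERTS

def cross_device_transfers(token_home_device, experts):
    """Count expert calls whose weights live on a remote device, i.e.
    calls that require moving the token's activation over the interconnect."""
    return sum(1 for e in experts if expert_device(e) != token_home_device)

# Route a batch of 64 tokens resident on device 0 and count remote calls.
num_tokens = 64
moved = sum(cross_device_transfers(0, route_token(t)) for t in range(num_tokens))
print(f"{moved} of {num_tokens * TOP_K} expert calls cross device boundaries")
```

With uniform routing, roughly (NUM_DEVICES - 1) / NUM_DEVICES of all expert calls land on a remote device, which is why expert selection, not compute, dominates cost in multi-unit serving; the paper's profiling looks for exploitable non-uniformity in exactly this traffic.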