[2605.05225] MACS: Modality-Aware Capacity Scaling for Efficient Multimodal MoE Inference
Computer Science > Machine Learning
arXiv:2605.05225 (cs)
[Submitted on 19 Apr 2026]

Title: MACS: Modality-Aware Capacity Scaling for Efficient Multimodal MoE Inference
Authors: Bo Li, Chuan Wu, Shaolin Zhu

Abstract: Mixture-of-Experts Multimodal Large Language Models (MoE MLLMs) suffer from a significant efficiency bottleneck during Expert Parallelism (EP) inference due to the straggler effect. The problem is aggravated in the multimodal setting, where existing token-count-based load-balancing methods fail to address two unique challenges: (1) Information Heterogeneity, where numerous redundant visual tokens are weighted equally with semantically critical ones, and (2) Modality Dynamics, where varying visual-to-text token ratios across tasks lead to resource misallocation. To address these challenges, we propose MACS (Modality-Aware Capacity Scaling), a training-free inference framework. Specifically, MACS introduces an Entropy-Weighted Load mechanism that quantifies the semantic value of visual tokens, addressing information heterogeneity. Additionally, a Dynamic Modality-Adaptive Capacity mechanism allocates expert resources based on the real-time modal composition of the input. Extensive experiments demonstrate that MACS significantly outperforms existing methods on various multimodal benchmarks, providing a novel a...
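The abstract does not give the concrete formulas behind the two mechanisms, but the general ideas can be sketched. The toy Python below illustrates one plausible reading: visual tokens contribute their normalized routing entropy to an expert's load instead of a flat count of 1 (Entropy-Weighted Load), and the per-expert capacity factor is interpolated from the live visual/text token mix (Dynamic Modality-Adaptive Capacity). All function names, the `f_text`/`f_visual` factors, and the top-k routing scheme are hypothetical illustrations, not the paper's actual method.

```python
import numpy as np

NUM_EXPERTS = 4
TOP_K = 2

def normalized_entropy(p, eps=1e-12):
    """Shannon entropy of a token's routing distribution, scaled to [0, 1]."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return float(-(p * np.log(p + eps)).sum() / np.log(len(p)))

def modality_capacity_factor(num_visual, num_text, f_text=1.0, f_visual=1.5):
    """Hypothetical capacity rule: interpolate between a text-heavy and a
    visual-heavy capacity factor by the visual token ratio of the batch."""
    r = num_visual / (num_visual + num_text)
    return (1 - r) * f_text + r * f_visual

def route(router_logits, is_visual, capacity_factor):
    """Toy top-k routing where each visual token adds its normalized routing
    entropy (<= 1) to expert load, while text tokens add a flat 1.0."""
    probs = np.exp(router_logits)
    probs /= probs.sum(axis=1, keepdims=True)
    loads = np.zeros(NUM_EXPERTS)
    for i, p in enumerate(probs):
        weight = normalized_entropy(p) if is_visual[i] else 1.0
        for e in np.argsort(p)[-TOP_K:]:  # top-k experts for this token
            loads[e] += weight
    # uniform per-expert capacity budget, scaled by the modality-aware factor
    capacity = capacity_factor * len(router_logits) * TOP_K / NUM_EXPERTS
    return loads, capacity
```

Under this sketch, a batch of redundant visual tokens with confidently peaked (low-entropy) router distributions registers as a lighter load than the same number of text tokens, and a visually dominated request is granted a larger capacity budget before tokens are dropped.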