[2603.03681] EvoPrune: Early-Stage Visual Token Pruning for Efficient MLLMs
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.03681 (cs)

[Submitted on 4 Mar 2026]

Title: EvoPrune: Early-Stage Visual Token Pruning for Efficient MLLMs

Authors: Yuhao Chen, Bin Shan, Xin Ye, Cheng Chen

Abstract: Multimodal Large Language Models (MLLMs) have shown strong performance in vision-language tasks, but their inference efficiency is severely limited by the rapid growth of visual tokens in complex scenarios such as high-resolution images and videos. Existing visual token pruning methods mainly operate after visual encoding, overlooking the substantial computational cost incurred during the encoding stage. To address this issue, we propose EvoPrune, an early-stage visual token pruning method for MLLMs that performs pruning directly during visual encoding. Specifically, EvoPrune employs a layer-wise pruning strategy guided by token similarity, diversity, and attention-based importance to retain the most informative visual tokens at selected encoding layers. Extensive experiments on image and video benchmarks validate the effectiveness of EvoPrune. In particular, on the VideoMME dataset, EvoPrune achieves a 2$\times$ inference speedup with less than 1% performance degradation, demonstrating its potential for latency-sensitive MLLM deployment.

Subjects: Computer Vision and Pattern Recognition (cs.CV)
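The abstract does not give implementation details, but the core idea (selecting informative tokens at an encoder layer using attention importance while penalizing redundancy among similar tokens) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the function name `prune_tokens`, the greedy selection scheme, and the `div_weight` trade-off parameter are hypothetical and not taken from the paper.

```python
import numpy as np

def prune_tokens(tokens, attn_cls, keep_ratio=0.5, div_weight=0.1):
    """Hypothetical layer-wise token pruning sketch.

    tokens   : (n, d) visual token features at some encoder layer
    attn_cls : (n,) attention-based importance score per token
               (e.g. attention received from a [CLS] token)
    Greedily keeps high-importance tokens while penalizing tokens
    that are too similar to ones already kept (diversity term).
    """
    n = tokens.shape[0]
    k = max(1, int(n * keep_ratio))

    # L2-normalize features so dot products act as cosine similarity
    feats = tokens / (np.linalg.norm(tokens, axis=1, keepdims=True) + 1e-8)

    kept = [int(np.argmax(attn_cls))]  # seed with the most attended token
    while len(kept) < k:
        sim = feats @ feats[kept].T        # similarity to the kept set
        redundancy = sim.max(axis=1)       # closeness to nearest kept token
        score = attn_cls - div_weight * redundancy
        score[kept] = -np.inf              # never re-select a kept token
        kept.append(int(np.argmax(score)))
    return np.array(sorted(kept))

# Toy usage: prune 16 random tokens down to 25%
rng = np.random.default_rng(0)
toks = rng.normal(size=(16, 8))
attn = rng.random(16)
idx = prune_tokens(toks, attn, keep_ratio=0.25)
```

Because the selection runs inside the encoder, all subsequent encoder layers (and the LLM) only process the `idx`-indexed subset, which is where the claimed encoding-stage savings would come from.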