[2603.00198] Stateful Token Reduction for Long-Video Hybrid VLMs
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.00198 (cs) [Submitted on 27 Feb 2026]

Title: Stateful Token Reduction for Long-Video Hybrid VLMs
Authors: Jindong Jiang, Amala Sanjay Deshmukh, Kateryna Chumachenko, Karan Sapra, Zhiding Yu, Guilin Liu, Andrew Tao, Pavlo Molchanov, Jan Kautz, Wonmin Byeon

Abstract: Token reduction is an effective way to accelerate long-video vision-language models (VLMs), but most existing methods are designed for dense Transformers and do not directly account for hybrid architectures that interleave attention with linear-time state-space blocks (e.g., Mamba). We study query-conditioned token reduction for hybrid video VLMs and analyze reduction behavior through two properties: layerwise sparsity (how many tokens capture query-relevant information) and importance stability (whether token-importance rankings persist across depth). Although token importance is sparse within each layer, the set of important tokens changes across layers, so aggressive early pruning is unreliable. Motivated by this, we propose a low-to-high progressive reduction schedule and a unified language-aware scoring mechanism for both attention and Mamba blocks (using an implicit-attention proxy for Mamba), enabling all-layer token reduction in hybrids. Under an aggressive compression setting (retaining 25%...
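The abstract's two core ideas, query-conditioned token scoring and a low-to-high progressive reduction schedule, can be illustrated with a minimal sketch. This is not the paper's implementation: the cosine-similarity scorer, the linear schedule, and all function names (`progressive_keep_ratios`, `query_conditioned_scores`, `reduce_to_k`) are illustrative assumptions standing in for the unified language-aware scoring mechanism and the actual schedule described above.

```python
import numpy as np

def progressive_keep_ratios(num_layers, final_keep=0.25):
    # Low-to-high schedule (assumed linear): keep nearly all tokens in early
    # layers, where importance rankings are still unstable, and reduce
    # progressively toward the final budget at depth.
    return np.linspace(1.0, final_keep, num_layers)

def query_conditioned_scores(tokens, query):
    # Illustrative scorer: cosine similarity between each video token and the
    # mean-pooled language-query embedding. The paper instead uses attention
    # scores and an implicit-attention proxy for Mamba blocks.
    q = query.mean(axis=0)
    return tokens @ q / (np.linalg.norm(tokens, axis=1) * np.linalg.norm(q) + 1e-8)

def reduce_to_k(tokens, query, k):
    # Keep the k highest-scoring tokens, preserving their temporal order.
    scores = query_conditioned_scores(tokens, query)
    keep = np.argsort(scores)[-k:]
    return tokens[np.sort(keep)]

rng = np.random.default_rng(0)
tokens = rng.standard_normal((1000, 64))   # stand-in for video tokens
query = rng.standard_normal((8, 64))       # stand-in for language-query tokens

# Targets are fractions of the ORIGINAL token count, so the final layer
# retains exactly final_keep (here 25%) of the input tokens.
n0 = len(tokens)
targets = [int(round(r * n0)) for r in progressive_keep_ratios(4, final_keep=0.25)]
for k in targets:
    tokens = reduce_to_k(tokens, query, k)
print(targets, len(tokens))  # → [1000, 750, 500, 250] 250
```

Computing each target from the original count (rather than compounding per-layer ratios) is what makes the schedule hit the stated 25% retention budget exactly at the last reduction step.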