[2603.28696] AdaptToken: Entropy-based Adaptive Token Selection for MLLM Long Video Understanding
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.28696 (cs)
[Submitted on 30 Mar 2026]

Title: AdaptToken: Entropy-based Adaptive Token Selection for MLLM Long Video Understanding
Authors: Haozhe Qi, Kevin Qu, Mahdi Rad, Rui Wang, Alexander Mathis, Marc Pollefeys

Abstract: Long video understanding remains challenging for Multi-modal Large Language Models (MLLMs) due to high memory costs and context-length limits. Prior approaches mitigate this by scoring and selecting frames/tokens within short clips, but they lack a principled mechanism to (i) compare relevance across distant video clips and (ii) stop processing once sufficient evidence has been gathered. We propose AdaptToken, a training-free framework that turns an MLLM's self-uncertainty into a global control signal for long-video token selection. AdaptToken splits a video into groups, extracts cross-modal attention to rank tokens within each group, and uses the model's response entropy to estimate each group's prompt relevance. This entropy signal enables a global token budget allocation across groups and further supports early stopping (AdaptToken-Lite), skipping the remaining groups when the model becomes sufficiently certain. Across four long-video benchmarks (VideoMME, LongVideoBench, LVBench, and MLVU) and multiple base MLLMs (7B-72B), Adap...
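The abstract's core loop can be sketched in a few lines: compute the model's response entropy per video group, convert entropies into a global token budget, and optionally stop early once the model is certain. This is a minimal illustrative sketch, not the paper's implementation; in particular, the inverse-entropy budget weighting and the entropy threshold are assumptions made for illustration.

```python
import numpy as np

def response_entropy(logits):
    """Shannon entropy (in nats) of the model's next-token distribution.

    Lower entropy means the model is more certain, which AdaptToken
    interprets as the current group being more relevant to the prompt.
    """
    z = logits - logits.max()          # stabilize softmax
    p = np.exp(z) / np.exp(z).sum()
    return float(-(p * np.log(p + 1e-12)).sum())

def allocate_budget(group_entropies, total_budget):
    """Split a global token budget across groups.

    Illustrative rule (an assumption, not the paper's exact formula):
    weight each group by inverse entropy, so more-certain groups keep
    more of their attention-ranked tokens.
    """
    w = 1.0 / (np.asarray(group_entropies, dtype=float) + 1e-6)
    w /= w.sum()
    return np.maximum(1, np.round(w * total_budget).astype(int))

def should_stop_early(group_entropies, threshold=0.5):
    """AdaptToken-Lite style check: skip the remaining groups once the
    latest response entropy falls below a certainty threshold
    (threshold value here is a placeholder)."""
    return len(group_entropies) > 0 and group_entropies[-1] < threshold
```

Usage: process groups sequentially, append each group's `response_entropy` to a list, call `should_stop_early` after each group, and use `allocate_budget` over the processed groups to decide how many attention-ranked tokens each contributes to the final context.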