[2605.07355] TTF: Temporal Token Fusion for Efficient Video-Language Model
Computer Science > Computer Vision and Pattern Recognition
arXiv:2605.07355 (cs) [Submitted on 8 May 2026]

Title: TTF: Temporal Token Fusion for Efficient Video-Language Model
Authors: Simin Huo, Ning LI

Abstract: Video-language models (VLMs) face rapidly growing inference costs as visual token counts scale with video length. For example, 32 frames at $448{\times}448$ resolution already yield more than 8,000 visual tokens in Qwen3-VL, making LLM prefill the dominant throughput bottleneck. Existing methods often rely on global similarity or attention-guided compression, incurring overheads that offset part of their gains. We propose \textbf{Temporal Token Fusion (TTF)}, a training-free, plug-and-play pre-LLM token compression framework that exploits structured temporal redundancy in video. TTF automatically selects an anchor frame, then, for each subsequent frame, performs a local window similarity search (e.g., $3\times 3$), fusing tokens whose similarity exceeds a threshold. The compressed sequence maintains positional consistency across both prefill and decoding through coordinate realignment, enabling seamless integration with existing VLM pipelines. On Qwen3-VL-8B with threshold $t=0.70$, TTF removes about 67\% of visual tokens while retaining 99.5\% of the baseline accuracy and introducing only ${\approx}0.16$\,GFLOPs of matching overhead. Overall, TTF offers a practical, efficie...
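The abstract's core mechanism (anchor frame, local-window cosine matching, threshold-based fusion) can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the function name `ttf_fuse`, the array layout `[H, W, D]`, and the return values are assumptions, and the coordinate-realignment step described in the abstract is omitted.

```python
import numpy as np

def ttf_fuse(anchor, frame, t=0.70, win=1):
    """Sketch of TTF-style local-window token fusion (illustrative, not the
    paper's implementation).

    anchor, frame: [H, W, D] grids of visual tokens from two video frames.
    For each token in `frame`, search the (2*win+1) x (2*win+1) window of
    `anchor` centred at the same spatial position (win=1 gives the 3x3
    window from the abstract). If the best cosine similarity exceeds the
    threshold t, the token is fused (dropped and mapped to its matching
    anchor token); otherwise it is kept.

    Returns (kept, fused): a list of kept (i, j) coordinates and a dict
    mapping fused frame coordinates to their anchor coordinates.
    """
    H, W, D = frame.shape
    # L2-normalise so plain dot products are cosine similarities.
    a = anchor / (np.linalg.norm(anchor, axis=-1, keepdims=True) + 1e-8)
    f = frame / (np.linalg.norm(frame, axis=-1, keepdims=True) + 1e-8)
    kept, fused = [], {}
    for i in range(H):
        for j in range(W):
            # Clip the local search window to the token grid.
            i0, i1 = max(0, i - win), min(H, i + win + 1)
            j0, j1 = max(0, j - win), min(W, j + win + 1)
            window = a[i0:i1, j0:j1].reshape(-1, D)
            sims = window @ f[i, j]
            k = int(np.argmax(sims))
            if sims[k] > t:
                # Fuse: reuse the best-matching anchor token for this slot.
                di, dj = divmod(k, j1 - j0)
                fused[(i, j)] = (i0 + di, j0 + dj)
            else:
                kept.append((i, j))
    return kept, fused
```

Under this sketch, a frame identical to the anchor fuses every token (temporal redundancy is maximal), while an uncorrelated frame keeps nearly all of them; the threshold `t` trades compression against fidelity, matching the $t=0.70$ operating point quoted in the abstract.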