[2512.16975] InfoTok: Adaptive Discrete Video Tokenizer via Information-Theoretic Compression
Computer Science > Computer Vision and Pattern Recognition
arXiv:2512.16975 (cs)
[Submitted on 18 Dec 2025 (v1), last revised 22 Mar 2026 (this version, v3)]

Title: InfoTok: Adaptive Discrete Video Tokenizer via Information-Theoretic Compression
Authors: Haotian Ye, Qiyuan He, Jiaqi Han, Puheng Li, Jiaojiao Fan, Zekun Hao, Fitsum Reda, Yogesh Balaji, Huayu Chen, Sheng Liu, Angela Yao, James Zou, Stefano Ermon, Haoxiang Wang, Ming-Yu Liu

Abstract: Accurate and efficient discrete video tokenization is essential for processing long video sequences. Yet the inherent complexity and variable information density of videos present a significant bottleneck for current tokenizers, which rigidly compress all content at a fixed rate, leading to redundancy or information loss. Drawing inspiration from Shannon's information theory, this paper introduces InfoTok, a principled framework for adaptive video tokenization. We rigorously prove that existing data-agnostic training methods are suboptimal in representation length, and present a novel evidence lower bound (ELBO)-based algorithm that approaches theoretical optimality. Leveraging this framework, we develop a transformer-based adaptive compressor that enables adaptive tokenization. Empirical results demonstrate state-of-the-art compression performance, saving 20...
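The core idea the abstract describes, allocating more tokens to segments that carry more information instead of compressing everything at a fixed rate, can be illustrated with a toy sketch. This is not the paper's algorithm; the Shannon-entropy estimate and the proportional budget rule below are hypothetical stand-ins for InfoTok's learned adaptive compressor:

```python
import math

def entropy_bits(probs):
    # Shannon entropy of a discrete distribution, in bits.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def allocate_tokens(clip_entropies, total_budget):
    # Split a fixed token budget across clips in proportion to each
    # clip's estimated information content, so that high-information
    # clips receive more tokens and near-static clips receive fewer.
    total = sum(clip_entropies)
    return [max(1, round(total_budget * h / total)) for h in clip_entropies]

# Toy example: a near-static clip, a moderately dynamic clip, a busy clip,
# each summarized by a (hypothetical) symbol distribution over 4 symbols.
static = entropy_bits([0.97, 0.01, 0.01, 0.01])  # low entropy
medium = entropy_bits([0.4, 0.3, 0.2, 0.1])
busy   = entropy_bits([0.25, 0.25, 0.25, 0.25])  # maximal entropy, 2 bits

budget = allocate_tokens([static, medium, busy], total_budget=256)
```

A fixed-rate tokenizer would spend 256/3 tokens on each clip regardless of content; here the busy clip gets roughly eight times the allocation of the static one.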