[2603.22304] Mitigating Premature Discretization with Progressive Quantization for Robust Vector Tokenization
Computer Science > Machine Learning
arXiv:2603.22304 (cs) [Submitted on 17 Mar 2026]

Title: Mitigating Premature Discretization with Progressive Quantization for Robust Vector Tokenization
Authors: Wenhao Zhao, Qiran Zou, Zhouhan Lin, Dianbo Liu

Abstract: Vector Quantization (VQ) has become the cornerstone of tokenization for many multimodal Large Language Models and for diffusion-based synthesis. However, existing VQ paradigms suffer from a fundamental conflict: they enforce discretization before the encoder has captured the underlying data manifold, a phenomenon we term Premature Discretization. To resolve this, we propose Progressive Quantization (ProVQ), which treats quantization hardness as a fundamental yet previously overlooked axis of VQ training. By casting quantization as a curriculum that smoothly anneals from a continuous latent space to a discrete one, ProVQ guides the codebook toward well-expanded manifolds. Extensive experiments demonstrate the broad effectiveness of ProVQ across diverse modalities: we report improved reconstruction and generation performance on the ImageNet-1K and ImageNet-100 benchmarks, highlighting ProVQ's benefit for generative modeling. Furthermore, ProVQ proves highly effective for modeling complex biologi...
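To make the annealing idea concrete, here is a minimal NumPy sketch of one plausible reading of the abstract: the quantizer output is interpolated between the continuous encoder latent and its nearest-codebook assignment, with a hardness weight `alpha` that anneals from 0 (fully continuous) to 1 (fully discrete) over training. The function names, the linear schedule, and the blending rule are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def nearest_code(z, codebook):
    """Standard VQ assignment: map each latent to its nearest codebook entry."""
    # z: (n, d) latents; codebook: (k, d) code vectors
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (n, k)
    return codebook[d2.argmin(axis=1)]

def progressive_quantize(z, codebook, alpha):
    """Blend continuous latent and quantized code; alpha is quantization hardness.

    alpha = 0 -> fully continuous (early training),
    alpha = 1 -> fully discrete, i.e. ordinary VQ (late training).
    (Illustrative blending rule, not the paper's exact mechanism.)
    """
    return (1.0 - alpha) * z + alpha * nearest_code(z, codebook)

def hardness_schedule(step, total_steps):
    """A simple linear anneal of hardness; the paper's schedule may differ."""
    return min(1.0, step / total_steps)

# Toy demonstration of the two endpoints of the curriculum.
rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
codebook = rng.normal(size=(16, 8))
early = progressive_quantize(z, codebook, hardness_schedule(0, 100))
late = progressive_quantize(z, codebook, hardness_schedule(100, 100))
assert np.allclose(early, z)                          # alpha=0: encoder output passes through
assert np.allclose(late, nearest_code(z, codebook))   # alpha=1: reduces to standard VQ
```

At intermediate steps the output lies between the continuous latent and its code, so gradients can still shape the encoder's manifold before discretization fully takes hold, which is the intuition the abstract describes.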