[2603.22283] End-to-End Training for Unified Tokenization and Latent Denoising
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.22283 (cs) [Submitted on 23 Mar 2026]

Title: End-to-End Training for Unified Tokenization and Latent Denoising

Authors: Shivam Duggal, Xingjian Bai, Zongze Wu, Richard Zhang, Eli Shechtman, Antonio Torralba, Phillip Isola, William T. Freeman

Abstract: Latent diffusion models (LDMs) enable high-fidelity synthesis by operating in learned latent spaces. However, training state-of-the-art LDMs requires complex staging: a tokenizer must be trained first, before the diffusion model can be trained in the frozen latent space. We propose UNITE - an autoencoder architecture for unified tokenization and latent diffusion. UNITE consists of a Generative Encoder that serves as both image tokenizer and latent generator via weight sharing. Our key insight is that tokenization and generation can be viewed as the same latent inference problem under different conditioning regimes: tokenization infers latents from fully observed images, whereas generation infers them from noise together with text or class conditioning. Motivated by this, we introduce a single-stage training procedure that jointly optimizes both tasks via two forward passes through the same Generative Encoder. The shared parameters enable gradients to jointly shape the latent space, encouraging a "co...
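The abstract's two-pass scheme can be illustrated with a minimal sketch. This is not the paper's implementation: the linear "encoder," the pseudo-inverse decoder, the additive conditioning, and the latent-matching loss are all toy assumptions chosen only to show the structure of one shared module receiving gradients from both a tokenization pass and a generation pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions and a single shared weight matrix standing in for the
# Generative Encoder (hypothetical; the paper's module is a neural network).
d_img, d_lat = 8, 4
W = rng.normal(scale=0.1, size=(d_lat, d_img))

def generative_encoder(inp, W):
    """One shared module used by both passes: maps an input vector
    (observed image, or noise plus conditioning) to a latent code."""
    return W @ inp

# Pass 1 (tokenization): infer the latent from a fully observed image.
x = rng.normal(size=d_img)
z_tok = generative_encoder(x, W)

# Pass 2 (generation): infer the latent from noise plus conditioning.
# Conditioning is modeled here as a class embedding added in input space
# (an illustrative simplification, not the paper's conditioning mechanism).
cond = rng.normal(size=d_img)
noise = rng.normal(size=d_img)
z_gen = generative_encoder(noise + cond, W)

# Joint single-stage objective: a reconstruction term from the tokenization
# pass plus a term pulling the generated latent toward the token latent.
# The decoder is W's pseudo-inverse, again purely for illustration.
x_rec = np.linalg.pinv(W) @ z_tok
loss_rec = float(np.mean((x_rec - x) ** 2))
loss_gen = float(np.mean((z_gen - z_tok) ** 2))
loss = loss_rec + loss_gen

# Because both terms depend on the same W, gradients from both tasks
# would flow into one set of parameters and jointly shape the latent space.
print(loss_rec, loss_gen, loss)
```

The point of the sketch is the weight sharing: `W` appears in both forward passes, so any optimizer step on the joint loss updates the tokenizer and the latent generator together, which is the single-stage training the abstract describes.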