[2603.02667] DREAM: Where Visual Understanding Meets Text-to-Image Generation
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.02667 (cs)
[Submitted on 3 Mar 2026]
Title: DREAM: Where Visual Understanding Meets Text-to-Image Generation
Authors: Chao Li, Tianhong Li, Sai Vidyaranya Nuthalapati, Hong-You Chen, Satya Narayan Shukla, Yonghuan Yang, Jun Xiao, Xiangjun Fan, Aashu Singh, Dina Katabi, Shlok Kumar Mishra
Abstract: Unifying visual representation learning and text-to-image (T2I) generation within a single model remains a central challenge in multimodal learning. We introduce DREAM, a unified framework that jointly optimizes discriminative and generative objectives while learning strong visual representations. DREAM is built on two key techniques. During training, Masking Warmup, a progressive masking schedule, begins with minimal masking to establish the contrastive alignment necessary for representation learning, then gradually transitions to full masking for stable generative training. At inference, DREAM employs Semantically Aligned Decoding to align partially masked image candidates with the target text and select the best one for further decoding, improving text-image fidelity (+6.3%) without external rerankers. Trained solely on CC12M, DREAM achieves 72.7% ImageNet linear-probing accuracy (+1.1% over CLIP) and an FID of 4.25 (+6.2% over FLUID), with consistent gai...
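
The abstract names two mechanisms without implementation detail. The sketch below is one plausible reading of each, not the paper's specification: Masking Warmup rendered as a simple linear ramp of the mask ratio, and Semantically Aligned Decoding rendered as cosine-similarity selection among partially decoded candidates. The function names, the linear schedule, the `image_encoder`, and the similarity measure are all assumptions.

```python
# Illustrative sketch only; all hyperparameters and module names are assumed.
import torch
import torch.nn.functional as F


def masking_warmup_ratio(step: int, warmup_steps: int,
                         min_ratio: float = 0.0, max_ratio: float = 1.0) -> float:
    """Progressive masking schedule: start near-unmasked so the contrastive
    (discriminative) objective can establish image-text alignment, then ramp
    toward full masking for stable generative training. A linear ramp is an
    assumption; the paper may use a different schedule."""
    if step >= warmup_steps:
        return max_ratio
    frac = step / max(warmup_steps, 1)
    return min_ratio + frac * (max_ratio - min_ratio)


@torch.no_grad()
def semantically_aligned_decode(candidates: torch.Tensor,
                                text_emb: torch.Tensor,
                                image_encoder) -> torch.Tensor:
    """Semantically Aligned Decoding, sketched as text-conditioned candidate
    selection: embed each partially decoded image candidate, score it against
    the target text embedding, and keep the best candidate for further
    decoding (no external reranker).

    candidates: (K, C, H, W) partially masked/decoded image candidates
    text_emb:   (D,) embedding of the target prompt
    """
    img_emb = F.normalize(image_encoder(candidates), dim=-1)  # (K, D)
    text_emb = F.normalize(text_emb, dim=-1)                  # (D,)
    scores = img_emb @ text_emb                               # (K,) cosine similarity
    return candidates[scores.argmax()]
```

In this reading, the warmup ratio would drive how many image tokens are masked at each training step, and the decoding-time selection would be applied between decoding stages; both choices are conjecture based on the abstract alone.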