[2603.02547] CoDAR: Continuous Diffusion Language Models are More Powerful Than You Think
Computer Science > Computation and Language
arXiv:2603.02547 (cs)
[Submitted on 3 Mar 2026]

Title: CoDAR: Continuous Diffusion Language Models are More Powerful Than You Think
Authors: Junzhe Shen, Jieru Zhao, Ziwei He, Zhouhan Lin

Abstract: We study why continuous diffusion language models (DLMs) have lagged behind discrete diffusion approaches despite their appealing continuous generative dynamics. Under a controlled token-recovery study, we identify token rounding, the final projection from denoised embeddings to tokens, as a primary bottleneck. Building on these insights, we propose CoDAR (Continuous Diffusion with Contextual AutoRegressive Decoder), a two-stage framework that keeps diffusion entirely continuous in an embedding space while learning a strong, context-conditional discretizer: an autoregressive Transformer decoder that cross-attends to the denoised embedding sequence and performs contextualized rounding to tokens. Experiments on LM1B and OpenWebText demonstrate that CoDAR substantially improves generation quality over latent diffusion and becomes competitive with strong discrete DLMs, while exposing a simple decoder-temperature knob to navigate the fluency-diversity trade-off.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
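The decoding stage the abstract describes lends itself to a short sketch. The PyTorch code below illustrates one plausible shape for such a contextual autoregressive rounder: a Transformer decoder that cross-attends to the denoised embedding sequence as memory and samples tokens with a temperature knob. This is a minimal sketch under stated assumptions, not the authors' implementation; the class name ContextualRounder, the method round_tokens, and all hyperparameters are hypothetical.

```python
# Hypothetical sketch of contextualized rounding, assuming the denoised
# embeddings share the decoder's model dimension. Not the CoDAR codebase.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextualRounder(nn.Module):
    """AR Transformer decoder that cross-attends to a denoised embedding
    sequence from a continuous diffusion model and emits discrete tokens."""
    def __init__(self, vocab_size, d_model=512, n_layers=4, n_heads=8,
                 max_len=1024):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    @torch.no_grad()
    def round_tokens(self, denoised, bos_id, temperature=1.0):
        """Decode tokens conditioned on the whole denoised sequence via
        cross-attention; `temperature` is the fluency-diversity knob."""
        B, L, _ = denoised.shape
        tokens = torch.full((B, 1), bos_id, dtype=torch.long,
                            device=denoised.device)
        for _ in range(L):
            T = tokens.size(1)
            pos = torch.arange(T, device=denoised.device)
            x = self.tok_emb(tokens) + self.pos_emb(pos)
            # Causal mask so each position attends only to earlier tokens.
            causal = torch.triu(torch.full((T, T), float("-inf"),
                                           device=denoised.device), diagonal=1)
            h = self.decoder(x, memory=denoised, tgt_mask=causal)
            logits = self.lm_head(h[:, -1]) / max(temperature, 1e-6)
            next_tok = torch.multinomial(F.softmax(logits, dim=-1), 1)
            tokens = torch.cat([tokens, next_tok], dim=1)
        return tokens[:, 1:]  # drop the BOS token

# Illustrative usage: random embeddings stand in for diffusion output.
rounder = ContextualRounder(vocab_size=30522)
denoised = torch.randn(2, 16, 512)
ids = rounder.round_tokens(denoised, bos_id=101, temperature=0.8)
```

Lowering `temperature` sharpens the token distribution toward fluent but repetitive output, while raising it increases diversity, which matches the trade-off the abstract attributes to the decoder-temperature knob.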