[2604.02718] Generative Frontiers: Why Evaluation Matters for Diffusion Language Models
Computer Science > Machine Learning
arXiv:2604.02718 (cs)
[Submitted on 3 Apr 2026]

Title: Generative Frontiers: Why Evaluation Matters for Diffusion Language Models
Authors: Patrick Pynadath, Jiaxin Shi, Ruqi Zhang

Abstract: Diffusion language models have seen exciting recent progress, offering far more flexibility in generative trajectories than autoregressive models. This flexibility has motivated a growing body of research into new approaches to diffusion language modeling, typically conducted at the scale of GPT-2 small (roughly 125 million parameters). These advances, however, raise new issues of evaluation methodology. In this technical note, we discuss the limitations of current methodology and propose principled augmentations to ensure reliable comparisons. We first discuss why OpenWebText has become the standard benchmark, and why alternatives such as LM1B are inherently less meaningful. We then examine the limitations of likelihood evaluation for diffusion models, and explain why relying on generative perplexity alone can lead to uninformative results. To address this, we show that generative perplexity and entropy are two components of the KL divergence to a reference distribution. This decomposition explains generative perplexity's sensitivity to entropy, and naturally suggests...
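A sketch of the decomposition the abstract alludes to, in our own notation (not necessarily the paper's): let q be the model's generative distribution and p a reference distribution, e.g., one induced by a strong autoregressive evaluator. Then

$$
\mathrm{KL}(q \,\|\, p) \;=\; \underbrace{\mathbb{E}_{x \sim q}[-\log p(x)]}_{\text{log generative perplexity}} \;-\; \underbrace{\mathbb{E}_{x \sim q}[-\log q(x)]}_{\text{entropy } H(q)}
$$

so generative perplexity measures only the cross-entropy term: a sampler can drive it down by collapsing the entropy of q (e.g., low-temperature or repetitive sampling) without moving any closer to p in KL.

A minimal Monte Carlo sketch of the same decomposition, assuming per-token log-probabilities are given as inputs (hypothetical arrays; exact log q(x) for a diffusion model typically requires a bound or estimator, which is part of the likelihood-evaluation difficulty the note discusses):

```python
import numpy as np

def kl_decomposition(ref_logprobs, model_logprobs):
    """Estimate KL(q || p) = E_q[-log p] - H(q) from samples x ~ q.

    ref_logprobs:   per-token log p(x) under the reference model (hypothetical input)
    model_logprobs: per-token log q(x) under the generative model (hypothetical input)
    """
    cross_entropy = -np.mean(ref_logprobs)  # log generative perplexity, nats/token
    entropy = -np.mean(model_logprobs)      # Monte Carlo estimate of H(q)
    return {
        "gen_ppl": float(np.exp(cross_entropy)),
        "entropy": float(entropy),
        "kl": float(cross_entropy - entropy),
    }

# Fabricated numbers for illustration: a low generative perplexity alone
# looks good, but the KL reveals how much of it is bought by low entropy.
rng = np.random.default_rng(0)
print(kl_decomposition(rng.normal(-3.0, 0.5, 10_000),
                       rng.normal(-2.0, 0.5, 10_000)))
```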