[2602.22661] dLLM: Simple Diffusion Language Modeling
Summary
The paper introduces dLLM, an open-source framework for diffusion language modeling that standardizes core components, facilitating reproducibility and flexibility for researchers and developers.
Why It Matters
As diffusion language models gain traction, the need for a unified framework becomes critical. dLLM addresses the fragmentation of disparate codebases and improves accessibility, enabling researchers to build, fine-tune, and evaluate models efficiently. This can accelerate progress in natural language processing.
Key Takeaways
- dLLM provides a standardized framework for diffusion language modeling.
- The framework supports easy customization for new methods and architectures.
- Users can reproduce, fine-tune, and evaluate large DLMs through a unified pipeline.
- Minimal recipes for building small DLMs are included to enhance accessibility.
- Checkpoints of small DLMs are released to accelerate future research.
Computer Science > Computation and Language
arXiv:2602.22661 (cs) [Submitted on 26 Feb 2026]
Title: dLLM: Simple Diffusion Language Modeling
Authors: Zhanhui Zhou, Lingjie Chen, Hanghang Tong, Dawn Song
Abstract: Although diffusion language models (DLMs) are evolving quickly, many recent models converge on a set of shared components. These components, however, are distributed across ad-hoc research codebases or lack transparent implementations, making them difficult to reproduce or extend. As the field accelerates, there is a clear need for a unified framework that standardizes these common components while remaining flexible enough to support new methods and architectures. To address this gap, we introduce dLLM, an open-source framework that unifies the core components of diffusion language modeling -- training, inference, and evaluation -- and makes them easy to customize for new designs. With dLLM, users can reproduce, fine-tune, deploy, and evaluate open-source large DLMs such as LLaDA and Dream through a standardized pipeline. The framework also provides minimal, reproducible recipes for building small DLMs from scratch with accessible compute, including converting any BERT-style encoder or autoregressive LM into a DLM. We also release the checkpoints of these small DLMs to make DLMs more accessible and accelerate future research.
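The abstract's recipe for converting a BERT-style encoder into a DLM rests on the masked-diffusion forward process used by models like LLaDA: corrupt a fraction t of the tokens and train the model to recover the originals at the masked positions. The sketch below illustrates that forward step only; `MASK_ID` and the independent per-token masking scheme are illustrative assumptions, not the paper's exact recipe.

```python
import random

MASK_ID = 0  # hypothetical mask-token id; a real setup uses the tokenizer's [MASK]


def forward_mask(tokens, t, rng):
    """Masked-diffusion forward process at noise level t in [0, 1]:
    each token is independently replaced by MASK_ID with probability t.

    Returns the corrupted sequence and per-position targets, where
    targets[i] is the original token if position i was masked, else None
    (unmasked positions carry no loss).
    """
    noisy, targets = [], []
    for tok in tokens:
        if rng.random() < t:
            noisy.append(MASK_ID)
            targets.append(tok)  # the denoiser must predict this token
        else:
            noisy.append(tok)
            targets.append(None)
    return noisy, targets
```

At training time, t is typically drawn uniformly per sequence and the cross-entropy loss is computed only at the masked positions; because the objective is just masked-token prediction, an ordinary bidirectional encoder can serve as the denoiser unchanged.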