[2510.13870] Unlocking the Potential of Diffusion Language Models through Template Infilling
Computer Science > Computation and Language
arXiv:2510.13870 (cs)
[Submitted on 13 Oct 2025 (v1), last revised 7 Apr 2026 (this version, v2)]

Title: Unlocking the Potential of Diffusion Language Models through Template Infilling
Authors: Junhoo Lee, Seungyeon Kim, Nojun Kwak

Abstract: Diffusion Language Models (DLMs) have emerged as a promising alternative to Autoregressive Language Models, yet their inference strategies remain limited to prefix-based prompting inherited from the autoregressive paradigm. In this paper, we propose Template Infilling (TI), a tailored conditioning methodology for DLMs. Unlike conventional prefix prompting, TI flexibly aligns structural anchors across the entire target response space, establishing a global blueprint before filling in the masked segments. We demonstrate the effectiveness of our approach on diverse benchmarks, including mathematical reasoning, code generation, and trip planning, achieving consistent improvements of 9.40% over the baseline. Furthermore, we observe that TI provides additional advantages in multi-token generation settings, enabling effective speedup while maintaining generation quality and robustness. By enforcing these global constraints, TI ultimately facilitates System-2 reasoning, empowering the model to deliberate within a structurally defin...
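To make the contrast with prefix prompting concrete, the following is a minimal, hypothetical Python sketch of the template-infilling idea as described in the abstract: fixed structural anchors are laid out across the full response before generation, and a diffusion-style denoiser then fills only the masked segments between them. The names build_template, dlm_fill, and the stub denoiser are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of template infilling (not the paper's code).
MASK = "<mask>"

def build_template(anchors, segment_lengths):
    """Interleave fixed structural anchors with masked segments of given lengths,
    fixing the global layout of the response before any content is generated."""
    parts = []
    for anchor, length in zip(anchors, segment_lengths):
        parts.append(anchor)
        parts.extend([MASK] * length)
    return parts

def dlm_fill(template, denoise_step):
    """Replace mask tokens using a denoising function.
    A real DLM would unmask many positions per step in parallel;
    here each mask is filled once for illustration."""
    tokens = list(template)
    for i, tok in enumerate(tokens):
        if tok == MASK:
            tokens[i] = denoise_step(tokens, i)
    return tokens

if __name__ == "__main__":
    # Anchors act as the global blueprint; only the gaps between them are generated.
    anchors = ["Step 1:", "Step 2:", "Answer:"]
    template = build_template(anchors, segment_lengths=[4, 4, 1])
    # Stub denoiser for demonstration; a trained DLM would predict real tokens.
    filled = dlm_fill(template, denoise_step=lambda toks, i: f"tok{i}")
    print(" ".join(filled))
```

In prefix prompting, by contrast, all conditioning text precedes the response and the model must discover the response structure on its own; the sketch above shows how TI instead constrains that structure up front.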