[2604.03677] Unlocking Prompt Infilling Capability for Diffusion Language Models
Computer Science > Computation and Language

arXiv:2604.03677 (cs) [Submitted on 4 Apr 2026]

Title: Unlocking Prompt Infilling Capability for Diffusion Language Models
Authors: Yoshinari Fujinuma, Keisuke Sakaguchi

Abstract: Masked diffusion language models (dLMs) generate text through bidirectional denoising, yet this capability remains locked for infilling prompts. This limitation is an artifact of the current supervised finetuning (SFT) convention of applying response-only masking. To unlock this capability, we extend SFT with full-sequence masking, where both prompts and responses are masked jointly. Once unlocked, the model infills masked portions of a prompt template conditioned on few-shot examples. We show that such model-infilled prompts match or surpass manually designed templates, transfer effectively across models, and are complementary to existing prompt optimization methods. Our results suggest that training practices, not architectural limitations, are the primary bottleneck preventing masked diffusion language models from infilling effective prompts.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2604.03677 [cs.CL] (or arXiv:2604.03677v1 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.03677
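The training change the abstract describes is small enough to show concretely. Below is a minimal Python/PyTorch sketch of the two SFT masking regimes, written under assumptions not stated in the abstract: the function name `mask_for_sft`, the `MASK_ID` constant, the per-token masking probability, and the tensor layout are hypothetical illustration, not the authors' code.

```python
import torch

MASK_ID = 0  # hypothetical [MASK] token id; real models define their own

def mask_for_sft(input_ids: torch.Tensor, prompt_len: int,
                 full_sequence: bool = False, mask_prob: float = 0.5):
    """Build one masked-diffusion SFT example.

    input_ids:     1-D LongTensor of prompt tokens followed by response tokens.
    prompt_len:    number of leading prompt tokens.
    full_sequence: False -> conventional response-only masking;
                   True  -> full-sequence masking, where prompt and
                            response positions are masked jointly.
    Returns (noisy_ids, loss_mask); the denoising target is the original
    token at every position where loss_mask is True.
    """
    to_mask = torch.rand(input_ids.shape) < mask_prob
    if not full_sequence:
        # Conventional SFT: prompt tokens are never corrupted, so the model
        # never learns to denoise (i.e., infill) the prompt region.
        to_mask[:prompt_len] = False
    noisy_ids = torch.where(to_mask,
                            torch.full_like(input_ids, MASK_ID),
                            input_ids)
    return noisy_ids, to_mask
```

At inference time, prompt infilling amounts to placing mask tokens inside the prompt template itself; only a model trained with `full_sequence=True` has seen that input distribution during SFT, which matches the abstract's claim that the training convention, not the architecture, is what locks the capability.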