[2603.04514] Progressive Refinement Regulation for Accelerating Diffusion Language Model Decoding
Computer Science > Artificial Intelligence

arXiv:2603.04514 (cs)
[Submitted on 4 Mar 2026]

Title: Progressive Refinement Regulation for Accelerating Diffusion Language Model Decoding
Authors: Lipeng Wan, Jianhui Gu, Junjie Ma, Jianguo Huang, Shiguang Sun, Siyuan Li, Xuguang Lan

Abstract: Diffusion language models generate text through iterative denoising under a uniform refinement rule applied to all tokens. However, tokens stabilize at different rates in practice, leading to substantial redundant refinement and motivating refinement control over the denoising process. Existing approaches typically assess refinement necessity from instantaneous, step-level signals under a fixed decoding process. In contrast, whether a token has converged is defined by how its prediction changes along its future refinement trajectory. Moreover, changing the refinement rule reshapes future refinement trajectories, which in turn determine how refinement rules should be formulated, making refinement control inherently dynamic. We propose \emph{Progressive Refinement Regulation} (PRR), a progressive, trajectory-grounded refinement control framework that derives a token-level notion of empirical convergence progress from full decoding rollouts. Based on this signal, PRR learns a lightweight token-wise controller to regula...
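To make the motivation concrete, the toy sketch below simulates iterative denoising in which tokens stabilize at different rates, tracks a simple per-token convergence signal (how long a token's prediction has gone unchanged), and stops refining tokens once they appear converged. Everything here is an illustrative assumption — the denoiser, the patience threshold, and the freezing rule are stand-ins, not the paper's PRR controller, which is learned from full decoding rollouts rather than hand-set.

```python
# Illustrative sketch only: a hand-set stability heuristic standing in
# for trajectory-grounded refinement control. Not the PRR algorithm.
import random

random.seed(0)

NUM_TOKENS = 8
NUM_STEPS = 10
PATIENCE = 2  # steps a token's prediction must stay fixed to count as converged

def denoise_step(preds, step):
    """Toy denoiser: unstable tokens may still flip their predicted id.
    The flip probability decays with the step index, and different
    positions settle at different rates (assumed dynamics)."""
    new = preds[:]
    for i in range(len(preds)):
        if random.random() < max(0.0, 0.8 - 0.15 * step - 0.05 * i):
            new[i] = random.randint(0, 99)
    return new

preds = [random.randint(0, 99) for _ in range(NUM_TOKENS)]
stable_for = [0] * NUM_TOKENS   # consecutive steps without a prediction change
frozen = [False] * NUM_TOKENS   # converged tokens skip further refinement
refinements = 0                 # total per-token refinement calls actually made

for step in range(NUM_STEPS):
    proposal = denoise_step(preds, step)
    for i in range(NUM_TOKENS):
        if frozen[i]:
            continue                      # skip redundant refinement
        refinements += 1
        if proposal[i] == preds[i]:
            stable_for[i] += 1
            if stable_for[i] >= PATIENCE:
                frozen[i] = True          # empirical convergence reached
        else:
            stable_for[i] = 0             # still moving: reset the counter
            preds[i] = proposal[i]

print("frozen tokens:", sum(frozen), "of", NUM_TOKENS)
print("refinement calls:", refinements, "vs uniform", NUM_TOKENS * NUM_STEPS)
```

Under uniform refinement every token would be refined at every step (80 calls here); freezing converged tokens cuts that count, which is the redundancy the abstract points to. PRR replaces the fixed `PATIENCE` heuristic with a learned, token-wise controller grounded in how predictions evolve along full rollouts.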