[2603.25872] DRiffusion: Draft-and-Refine Process Parallelizes Diffusion Models with Ease
Computer Science > Machine Learning
arXiv:2603.25872 (cs)
[Submitted on 26 Mar 2026]

Title: DRiffusion: Draft-and-Refine Process Parallelizes Diffusion Models with Ease
Authors: Runsheng Bai, Chengyu Zhang, Yangdong Deng

Abstract: Diffusion models have achieved remarkable success in generating high-fidelity content but suffer from slow, iterative sampling, resulting in high latency that limits their use in interactive applications. We introduce DRiffusion, a parallel sampling framework that accelerates diffusion inference through a draft-and-refine process. DRiffusion employs skip transitions to generate multiple draft states for future timesteps and computes their corresponding noises in parallel; these noises are then used in the standard denoising process to produce refined results. Theoretically, our method achieves an acceleration rate of $\tfrac{1}{n}$ or $\tfrac{2}{n+1}$, depending on whether the conservative or aggressive mode is used, where $n$ denotes the number of devices. Empirically, DRiffusion attains a 1.4$\times$-3.7$\times$ speedup across multiple diffusion models while incurring minimal degradation in generation quality: on the MS-COCO dataset, both FID and CLIP scores remain largely on par with those of the original model, while PickScore and HPSv2.1 show only minor average drops of 0.17 and 0.43, respectively.
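The draft-and-refine loop described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the toy linear denoiser `eps_theta`, the DDIM-style update used as the skip transition, the alpha schedule, and the conservative-mode scheduling (each round drafts up to `n` future states, evaluates their noises "in parallel," then replays the standard updates with those noises) are all assumptions, since the abstract does not specify the exact update rules.

```python
import numpy as np

# Toy linear "denoiser" standing in for the real network (an assumption).
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 0.1

def eps_theta(x, t):
    # Hypothetical noise predictor conditioned on timestep t.
    return W @ x + 0.01 * t * np.ones_like(x)

def ddim_step(x, eps, a_from, a_to):
    # Deterministic DDIM-style update between noise levels a_from -> a_to,
    # used here as the "skip transition" for drafting.
    x0 = (x - np.sqrt(1 - a_from) * eps) / np.sqrt(a_from)
    return np.sqrt(a_to) * x0 + np.sqrt(1 - a_to) * eps

T = 8                                      # total denoising steps
alphas = np.linspace(0.05, 0.999, T + 1)   # cumulative alpha schedule
x = rng.standard_normal(4)                 # initial noise sample

t = T
while t > 0:
    n = min(4, t)  # number of "devices" available this round
    # Draft: reuse one noise estimate to jump ahead, producing coarse
    # draft states for the next n timesteps.
    eps_now = eps_theta(x, t)
    drafts, xd = [], x
    for k in range(n):
        xd = ddim_step(xd, eps_now, alphas[t - k], alphas[t - k - 1])
        drafts.append(xd)
    # Parallel phase (simulated serially here): evaluate the denoiser on
    # every draft; on n devices these calls would run concurrently.
    noises = [eps_theta(d, t - k - 1) for k, d in enumerate(drafts)]
    # Refine: replay the standard denoising updates, substituting the
    # precomputed noises for fresh sequential denoiser calls.
    for k in range(n):
        eps_k = eps_now if k == 0 else noises[k - 1]
        x = ddim_step(x, eps_k, alphas[t - k], alphas[t - k - 1])
    t -= n

print(x.shape)  # (4,)
```

Under this scheduling, each round costs roughly one sequential denoiser evaluation instead of `n`, which is the intuition behind the stated $\tfrac{1}{n}$ acceleration rate for the conservative mode (and $\tfrac{2}{n+1}$ for the aggressive mode, whose exact schedule the abstract does not detail).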