[2510.18087] Planned Diffusion
Computer Science > Artificial Intelligence
arXiv:2510.18087 (cs)
[Submitted on 20 Oct 2025 (v1), last revised 25 Mar 2026 (this version, v2)]

Title: Planned Diffusion
Authors: Daniel Israel, Tian Jin, Ellie Cheng, Guy Van den Broeck, Aditya Grover, Suvinay Subramanian, Michael Carbin

Abstract: Most large language models are autoregressive: they generate tokens one at a time. Discrete diffusion language models can generate multiple tokens in parallel, but sampling from them requires a denoising order: a strategy for deciding which tokens to decode at each step. Determining a good denoising order is difficult, and existing approaches use heuristics that create a steep trade-off between quality and latency. We propose planned diffusion, a system that trains the model to determine its own denoising order. Planned diffusion uses a single model that transitions between autoregressive and diffusion-based generation: first, the model autoregressively generates a plan that partitions the response into semantically independent chunks; second, the model denoises all chunks in parallel. The autoregressive plan enables the model to define the denoising order itself. On AlpacaEval, planned diffusion achieves 1.27x to 1.81x speedup over autoregressive generation with only a 0.87% to 5.4% drop in win rate, establishing a new Pareto frontier for parallel generation with discrete diffusion. Additional...
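The two-phase procedure in the abstract can be illustrated with a minimal toy sketch. This is not the paper's implementation: the planner, vocabulary, chunk names, and the random-unmasking denoiser below are all stand-ins. The sketch shows only the control flow the abstract describes: an autoregressive planning step emits a partition of the response into independent chunks, and then every chunk is denoised over the same shared steps (conceptually in parallel) rather than left to right.

```python
import random

random.seed(0)

MASK = "<mask>"


def plan_autoregressively(prompt):
    """Hypothetical phase 1: the model autoregressively emits a plan that
    partitions the response into semantically independent chunks, each
    with a target length. Faked here with a fixed plan."""
    return [("intro", 4), ("body", 6), ("closing", 3)]


def denoise_step(chunk_tokens, vocab):
    """One toy diffusion step: unmask roughly half of the still-masked
    positions (a stand-in for confidence-based token decoding)."""
    masked = [i for i, t in enumerate(chunk_tokens) if t == MASK]
    if not masked:
        return chunk_tokens
    for i in random.sample(masked, max(1, len(masked) // 2)):
        chunk_tokens[i] = random.choice(vocab)
    return chunk_tokens


def planned_diffusion(prompt, steps=4):
    vocab = ["the", "a", "model", "tokens", "fast", "plan"]
    plan = plan_autoregressively(prompt)             # phase 1: AR plan
    chunks = {name: [MASK] * n for name, n in plan}  # all chunks start fully masked
    for _ in range(steps):                           # phase 2: parallel denoising
        for name in chunks:                          # conceptually runs in parallel
            chunks[name] = denoise_step(chunks[name], vocab)
    # Concatenate the chunks in plan order to form the response.
    return [tok for name, _ in plan for tok in chunks[name]]


out = planned_diffusion("Explain diffusion LMs.")
```

The key point is the latency structure: the sequential cost is the short plan plus a fixed number of denoising steps shared by all chunks, instead of one sequential step per output token as in purely autoregressive decoding.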