[2506.10848] Accelerating Diffusion Large Language Models with SlowFast Sampling: The Three Golden Principles
Computer Science > Computation and Language

arXiv:2506.10848 (cs)

[Submitted on 12 Jun 2025 (v1), last revised 31 Mar 2026 (this version, v3)]

Title: Accelerating Diffusion Large Language Models with SlowFast Sampling: The Three Golden Principles

Authors: Qingyan Wei, Yaojie Zhang, Zhiyuan Liu, Puyu Zeng, Yuxuan Wang, Biqing Qi, Dongrui Liu, Linfeng Zhang

Abstract: Diffusion-based language models (dLLMs) have emerged as a promising alternative to traditional autoregressive LLMs by enabling parallel token generation and significantly reducing inference latency. However, existing sampling strategies for dLLMs, such as confidence-based or semi-autoregressive decoding, often suffer from static behavior, leading to suboptimal efficiency and limited flexibility. In this paper, we propose SlowFast Sampling, a novel dynamic sampling strategy that adaptively alternates between exploratory and accelerated decoding stages. Our method is guided by three golden principles: the certainty principle, the convergence principle, and the positional principle, which govern when and where tokens can be confidently and efficiently decoded. We further integrate our strategy with dLLM-Cache to reduce redundant computation. Extensive experiments across benchmarks and models show that SlowFast Sampling achieves up to 15.63$...
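The abstract names three principles but gives no algorithmic detail. The toy sketch below is a hypothetical illustration, not the paper's method: it decodes fake per-position confidences with (1) a certainty threshold for committing tokens, (2) a convergence check that switches from a slow exploratory stage to a fast accelerated stage, and (3) a positional preference for a leftmost window. All threshold values and helper names (`fake_confidences`, `CERTAINTY_THRESHOLD`, etc.) are invented for illustration.

```python
import random

# Hypothetical sketch of a SlowFast-style dynamic sampler on fake confidences.
# The constants and helpers below are illustrative assumptions, not values
# from the paper.
CERTAINTY_THRESHOLD = 0.9  # certainty: commit only high-confidence tokens
CONVERGENCE_DELTA = 0.05   # convergence: go fast once confidences stabilize
WINDOW = 4                 # positional: prefer a contiguous leftmost window

def fake_confidences(masked_positions, step):
    # Stand-in for model confidences: grows with denoising steps.
    return {p: min(1.0, random.random() * 0.5 + step * 0.1)
            for p in masked_positions}

def slowfast_decode(seq_len=16, max_steps=20):
    masked = set(range(seq_len))
    decoded_order = []
    prev_conf = {}
    fast = False
    for step in range(max_steps):
        if not masked:
            break
        conf = fake_confidences(masked, step)
        # Convergence principle: if confidences barely moved, enter fast mode.
        if prev_conf:
            drift = max(abs(conf[p] - prev_conf.get(p, 0.0)) for p in masked)
            fast = fast or drift < CONVERGENCE_DELTA
        prev_conf = conf
        # Positional principle: slow mode looks only at the leftmost window;
        # fast mode considers every remaining masked position.
        window = sorted(masked)[: len(masked) if fast else WINDOW]
        # Certainty principle: commit tokens above the threshold.
        ready = [p for p in window if conf[p] >= CERTAINTY_THRESHOLD]
        if not ready and window:
            # Always make progress in slow mode: take the single best token.
            ready = [max(window, key=lambda p: conf[p])]
        for p in ready:
            masked.discard(p)
            decoded_order.append(p)
    return decoded_order

order = slowfast_decode()
print(len(order))  # all 16 positions decoded
```

The speedup intuition: the slow stage commits one token at a time in a small window, while the fast stage commits many tokens per step in parallel, which is where a diffusion LLM gains over strictly autoregressive decoding.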