[2602.19066] IDLM: Inverse-distilled Diffusion Language Models
Summary
The paper presents Inverse-distilled Diffusion Language Models (IDLM), a method that significantly accelerates inference in text generation by reducing sampling steps while maintaining model performance.
Why It Matters
As diffusion models gain traction in natural language processing, IDLM addresses a critical bottleneck: their multi-step sampling makes inference slow, limiting practical deployment. This research contributes to the efficiency of generative AI, which is increasingly relevant across industries.
Key Takeaways
- IDLM reduces inference steps by 4x-64x compared to traditional diffusion models.
- The inverse formulation is proven to admit a unique solution, ensuring the distillation objective is well-posed.
- Gradient-stable relaxations are introduced to facilitate effective training in discrete spaces.
- The approach preserves the generative perplexity and entropy of the teacher model.
- This advancement enhances the practicality of diffusion models for text generation tasks.
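The "gradient-stable relaxations" above refer to techniques for backpropagating through discrete token sampling, which is otherwise non-differentiable. The summary does not specify IDLM's exact relaxation; the sketch below shows one widely used option, the Gumbel-softmax, purely as an illustration of the general idea (the function name and temperature parameter `tau` are assumptions, not the paper's API):

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Differentiable relaxation of sampling from a categorical
    distribution over tokens: perturb logits with Gumbel noise,
    then apply a temperature-scaled softmax."""
    rng = np.random.default_rng(0) if rng is None else rng
    u = rng.uniform(1e-10, 1.0 - 1e-10, size=logits.shape)
    g = -np.log(-np.log(u))                      # Gumbel(0, 1) noise
    y = (logits + g) / tau                       # perturb and temper
    y = np.exp(y - y.max(axis=-1, keepdims=True))
    return y / y.sum(axis=-1, keepdims=True)     # soft one-hot vector

# As tau -> 0 the output approaches a hard one-hot sample;
# larger tau gives smoother (more stable) gradients.
probs = gumbel_softmax(np.array([2.0, 0.5, -1.0]), tau=0.5)
```

The key trade-off such relaxations manage is between fidelity to discrete sampling (small `tau`) and gradient stability during training (larger `tau`).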
Abstract
Computer Science > Machine Learning. arXiv:2602.19066. Submitted on 22 Feb 2026.
Authors: David Li, Nikita Gushchin, Dmitry Abulkhanov, Eric Moulines, Ivan Oseledets, Maxim Panov, Alexander Korotin.
Diffusion Language Models (DLMs) have recently achieved strong results in text generation. However, their multi-step sampling leads to slow inference, limiting practical use. To address this, we extend Inverse Distillation, a technique originally developed to accelerate continuous diffusion models, to the discrete setting. Nonetheless, this extension introduces both theoretical and practical challenges. From a theoretical perspective, the inverse distillation objective lacks uniqueness guarantees, which may lead to suboptimal solutions. From a practical standpoint, backpropagation in the discrete space is non-trivial and often unstable. To overcome these challenges, we first provide a theoretical result demonstrating that our inverse formulation admits a unique solution, thereby ensuring valid optimization. We then introduce gradient-stable relaxations to support effective training. As a result, experiments on multiple DLMs show that our method, Inverse-distilled Diffusion Language Models (IDLM), reduces the number of inference steps by 4x-64x, while preserving the teacher's generative perplexity and entropy.
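The abstract describes training a fast student to match a slow multi-step teacher. IDLM's actual objective is not reproduced in this summary; as a generic illustration of distillation over token distributions, one common choice is a KL divergence between the teacher's and student's per-token predictive distributions (the function names below are assumptions for this sketch):

```python
import numpy as np

def log_softmax(z):
    # numerically stable log-softmax over the vocabulary axis
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def distill_kl(teacher_logits, student_logits):
    """KL(teacher || student) per token position, averaged over
    positions: the few-step student is pushed to reproduce the
    multi-step teacher's token distributions."""
    log_p = log_softmax(teacher_logits)
    log_q = log_softmax(student_logits)
    p = np.exp(log_p)
    return (p * (log_p - log_q)).sum(axis=-1).mean()

t = np.array([[1.0, 0.0, -1.0]])
loss_same = distill_kl(t, t + 3.0)                 # shift-invariant, so ~0
loss_diff = distill_kl(t, np.array([[0.0, 2.0, 0.0]]))
```

Matching distributions rather than sampled tokens is what lets such objectives preserve distributional statistics like generative perplexity and entropy.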