[2603.22216] Gumbel Distillation for Parallel Text Generation
Computer Science > Computation and Language
arXiv:2603.22216 (cs) [Submitted on 23 Mar 2026]

Title: Gumbel Distillation for Parallel Text Generation
Authors: Chi Zhang, Xixi Hu, Bo Liu, Qiang Liu

Abstract: The slow, sequential nature of autoregressive (AR) language models has driven the adoption of parallel decoding methods. However, these non-AR models often sacrifice generation quality because they struggle to model the complex joint distribution of token sequences. To narrow this performance gap, we introduce Gumbel Distillation, a novel distillation technique that enables parallel decoders to learn this distribution effectively. Our method leverages the Gumbel-Max trick to create a deterministic mapping from a latent Gumbel noise space to the output tokens of a high-performing AR teacher. As a model-agnostic technique, Gumbel Distillation integrates seamlessly with diverse parallel decoding architectures, including MDLM and BD3-LM. Experiments on LM1B and OpenWebText show that Gumbel Distillation substantially improves the generation quality of parallel language models, achieving a 30.0% improvement in MAUVE score and a 10.5% improvement in generative perplexity over MDLM trained on the OpenWebText dataset. Code available at this https URL.

Subjects: Computation and Language (cs.CL); Machine Learning (cs.LG)
Cite as: arXiv:2603.22216 [cs.CL]
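The Gumbel-Max trick at the core of the abstract can be illustrated with a minimal sketch (this is not the paper's implementation; the function name and setup are our own assumptions). Adding i.i.d. Gumbel(0, 1) noise to a model's logits and taking the argmax yields an exact sample from the softmax distribution, and fixing the noise makes the logits-to-token mapping deterministic, which is the property the distillation relies on:

```python
import numpy as np


def gumbel_max_sample(logits: np.ndarray, gumbel_noise: np.ndarray) -> int:
    """Sample a token index via the Gumbel-Max trick.

    argmax(logits + g), with g_i ~ Gumbel(0, 1), is an exact sample from
    softmax(logits). For a fixed noise vector the mapping is deterministic.
    """
    return int(np.argmax(logits + gumbel_noise))


def draw_gumbel_noise(shape, rng: np.random.Generator) -> np.ndarray:
    # Gumbel(0, 1) noise via inverse transform: g = -log(-log(U)), U ~ Uniform(0, 1)
    u = rng.uniform(low=1e-12, high=1.0, size=shape)
    return -np.log(-np.log(u))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = np.array([0.0, 1.0, 2.0])  # hypothetical teacher logits for 3 tokens

    # Determinism: the same latent noise always maps to the same token.
    g = draw_gumbel_noise(logits.shape, rng)
    assert gumbel_max_sample(logits, g) == gumbel_max_sample(logits, g)

    # Correctness: empirical frequencies match softmax(logits).
    counts = np.zeros(3)
    for _ in range(20_000):
        counts[gumbel_max_sample(logits, draw_gumbel_noise(logits.shape, rng))] += 1
    probs = np.exp(logits) / np.exp(logits).sum()
    print(counts / counts.sum(), probs)
```

In the distillation setting described above, the latent noise vector would be shared between teacher and student so that both are trained toward the same deterministic noise-to-token mapping; the sketch only demonstrates the underlying trick, not the training procedure.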