[2506.01928] Esoteric Language Models: Bridging Autoregressive and Masked Diffusion LLMs
Summary
The paper introduces Eso-LMs, a new family of language models that fuses the autoregressive and masked diffusion paradigms, improving inference efficiency and generation speed while maintaining output quality.
Why It Matters
This research addresses the limitations of existing language models by combining the strengths of the autoregressive and masked diffusion approaches. The gains in inference efficiency and speed matter for practical natural language processing deployments, making the work relevant to developers and researchers in AI and machine learning.
Key Takeaways
- Eso-LMs fuse autoregressive and masked diffusion models to improve performance.
- The model achieves significant speed improvements in inference, outperforming standard masked diffusion models.
- Introduces KV caching for masked diffusion models, enhancing efficiency without sacrificing parallel generation.
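To make the KV-caching takeaway concrete, here is a minimal NumPy sketch of the general mechanism (not the paper's specific Eso-LM algorithm): with causal attention, each token's key/value projections never change once computed, so a decoder can append them to a cache and attend over the cache instead of recomputing attention from scratch at every step. All names below (`attend`, `K_cache`, etc.) are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, K, V):
    # Single-query scaled dot-product attention over cached keys/values.
    scores = q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
d, n = 8, 5
# Simulated per-token query/key/value projections for a 5-token sequence.
queries = rng.normal(size=(n, d))
keys = rng.normal(size=(n, d))
values = rng.normal(size=(n, d))

# Causal attention for the last token, recomputed from scratch.
full_out = attend(queries[-1], keys, values)

# Incremental decoding with a KV cache: append one (k, v) pair per step
# and reuse everything already cached -- earlier entries are never redone.
K_cache = np.empty((0, d))
V_cache = np.empty((0, d))
for t in range(n):
    K_cache = np.vstack([K_cache, keys[t:t + 1]])
    V_cache = np.vstack([V_cache, values[t:t + 1]])
    out = attend(queries[t], K_cache, V_cache)

# The cached computation matches the from-scratch result.
assert np.allclose(out, full_out)
```

Bidirectional-attention denoisers cannot cache this way, because every token's key/value depends on the full (changing) sequence; the paper's switch to causal attention is what makes the cache valid for MDMs.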
Computer Science > Computation and Language
arXiv:2506.01928 (cs)
[Submitted on 2 Jun 2025 (v1), last revised 21 Feb 2026 (this version, v3)]
Title: Esoteric Language Models: Bridging Autoregressive and Masked Diffusion LLMs
Authors: Subham Sekhar Sahoo, Zhihan Yang, Yash Akhauri, Johnna Liu, Deepansha Singh, Zhoujun Cheng, Zhengzhong Liu, Eric Xing, John Thickstun, Arash Vahdat
Abstract: Diffusion-based language models offer a compelling alternative to autoregressive (AR) models by enabling parallel and controllable generation. Within this family, Masked Diffusion Models (MDMs) currently perform best but still underperform AR models in perplexity and lack key inference-time efficiency features, most notably KV caching. We introduce Eso-LMs, a new family of models that fuses AR and MDM paradigms, smoothly interpolating between their perplexities while overcoming their respective limitations. Unlike prior work, which uses transformers with bidirectional attention as MDM denoisers, we exploit the connection between MDMs and Any-Order autoregressive models and adopt causal attention. This design lets us compute the exact likelihood of MDMs for the first time and, crucially, enables us to introduce KV caching for MDMs while preserving parallel generation for the first time, significantly improving inference efficiency. […]
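The "parallel generation" the abstract preserves refers to the masked-diffusion sampling loop: start from an all-mask sequence and, at each step, let the denoiser fill in a subset of masked positions simultaneously rather than one token at a time. The toy sketch below illustrates only that control flow; `toy_denoiser` is a hypothetical stand-in for a trained model, and the subset-selection rule is a simplifying assumption, not the paper's schedule.

```python
import random

MASK = "<mask>"

def toy_denoiser(seq):
    # Stand-in for a learned denoiser: proposes a token for every
    # position. A real MDM would produce model predictions here.
    return ["tok%d" % i for i in range(len(seq))]

def mdm_sample(length, max_steps, rng):
    seq = [MASK] * length
    for _ in range(max_steps):
        masked = [i for i, t in enumerate(seq) if t == MASK]
        if not masked:
            break
        preds = toy_denoiser(seq)
        # Unmask roughly half of the remaining masked positions at once:
        # many tokens are committed per step, in parallel.
        k = max(1, len(masked) // 2)
        for i in rng.sample(masked, k):
            seq[i] = preds[i]
    return seq

rng = random.Random(0)
out = mdm_sample(8, max_steps=10, rng=rng)
assert MASK not in out  # all positions filled in ~log(length) steps
```

An AR decoder would need `length` steps for the same sequence; the diffusion loop finishes in far fewer, which is why keeping this parallelism while adding KV caching is the paper's central efficiency claim.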