[2604.03537] Rethinking Token Prediction: Tree-Structured Diffusion Language Model
Computer Science > Computation and Language

arXiv:2604.03537 (cs)

[Submitted on 4 Apr 2026]

Title: Rethinking Token Prediction: Tree-Structured Diffusion Language Model

Authors: Zihao Wu, Haoming Yang, Juncheng Dong, Vahid Tarokh

Abstract: Discrete diffusion language models have emerged as a competitive alternative to auto-regressive language models, but training them efficiently under limited parameter and memory budgets remains challenging. Modern architectures are predominantly based on a full-vocabulary token prediction layer, which accounts for a substantial fraction of model parameters (e.g., more than 20% in small-scale DiT-style designs) and often dominates peak GPU memory usage. This leads to inefficient use of both parameters and memory under constrained training resources. To address this issue, we revisit the necessity of explicit full-vocabulary prediction and instead exploit the inherent structure among tokens to build a tree-structured diffusion language model. Specifically, we model the diffusion process with intermediate latent states corresponding to a token's ancestor nodes in a pre-constructed vocabulary tree. This tree-structured factorization exponentially reduces the classification dimensionality, makes the prediction head negligible in size, and enables reallocation of parameters to deepen t...
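To make the parameter-budget argument concrete, here is a minimal back-of-the-envelope sketch in Python. It is not from the paper: the vocabulary size, hidden dimension, backbone size, and branching factor are all assumptions chosen to resemble a small DiT-style model, and it compares a full-vocabulary prediction head with the simplest tree-factorized alternative (one small classifier per tree level).

```python
import math

# Hypothetical sizes (assumptions, not from the paper): a small DiT-style
# backbone with a GPT-2-like vocabulary, chosen only to illustrate scaling.
vocab_size = 50_257            # |V|, assumed
hidden_dim = 768               # d, assumed
backbone_params = 110_000_000  # assumed non-head parameter count

# Full-vocabulary prediction head: a single d x |V| output projection.
full_head = hidden_dim * vocab_size
print(f"full head: {full_head / 1e6:.1f}M params "
      f"({full_head / (full_head + backbone_params):.0%} of total)")

# Tree-structured head: route each token through a b-ary vocabulary tree of
# depth ceil(log_b |V|); each step is a small b-way classification, so the
# head needs only depth * (d x b) parameters in this simple weight layout.
branching = 16                                    # b, assumed
depth = math.ceil(math.log(vocab_size, branching))
tree_head = depth * hidden_dim * branching
print(f"tree head: {tree_head / 1e3:.1f}K params over {depth} levels "
      f"({tree_head / (tree_head + backbone_params):.2%} of total)")
```

With these assumed sizes the full head is roughly a quarter of the model (consistent with the "more than 20%" figure quoted in the abstract), while the tree-factorized head shrinks to tens of thousands of parameters, i.e., a negligible fraction, which is the reallocation headroom the abstract describes.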