[2603.02760] Efficient Self-Evaluation for Diffusion Language Models via Sequence Regeneration
Computer Science > Computation and Language

arXiv:2603.02760 (cs) [Submitted on 3 Mar 2026]

Title: Efficient Self-Evaluation for Diffusion Language Models via Sequence Regeneration

Authors: Linhao Zhong, Linyu Wu, Wen Wang, Yuling Xi, Chenchen Jing, Jiaheng Zhang, Hao Chen, Chunhua Shen

Abstract: Diffusion large language models (dLLMs) have recently attracted significant attention for their ability to enhance diversity, controllability, and parallelism. However, their non-sequential, bidirectionally masked generation makes quality assessment difficult, underscoring the need for effective self-evaluation. In this work, we propose DiSE, a simple yet effective self-evaluation confidence quantification method for dLLMs. DiSE quantifies confidence by computing the probability of regenerating the tokens of the entire generated sequence, given the full context. This method enables more efficient and reliable quality assessment by leveraging token regeneration probabilities, facilitating both likelihood estimation and robust uncertainty quantification. Building upon DiSE, we further introduce a flexible-length generation framework, which adaptively controls the sequence length based on the model's self-assessment of its own output. We analyze and validate the feasibility of DiSE from the perspective of dLLM ...
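The abstract describes DiSE as scoring a completed sequence by the probability that the model would regenerate each of its tokens given the full context. The paper's model interface is not shown here, so the sketch below is a minimal, hypothetical illustration: it assumes the per-position logits from a single remasked forward pass are already available, and reduces them to a confidence score via the geometric mean of the per-token regeneration probabilities. The helper `pick_best_length`, which selects among candidate generation lengths by this score, is likewise an illustrative stand-in for the flexible-length framework, not the authors' implementation.

```python
import math

def dise_confidence(token_logits, generated_ids):
    """Score a generated sequence by its regeneration probability.

    token_logits: list of per-position logit vectors (one list of floats
                  per generated token), assumed to come from one forward
    ``            pass with the whole sequence remasked, full context given.
    generated_ids: the token ids the model originally produced.
    Returns the geometric mean of per-token regeneration probabilities.
    """
    total_logprob = 0.0
    for logits, tok in zip(token_logits, generated_ids):
        # Numerically stable log-softmax for the regenerated token.
        m = max(logits)
        log_z = m + math.log(sum(math.exp(x - m) for x in logits))
        total_logprob += logits[tok] - log_z
    return math.exp(total_logprob / len(generated_ids))

def pick_best_length(candidates):
    """Toy flexible-length selection: given (ids, logits) pairs for
    candidate lengths, return the index with the highest DiSE score."""
    return max(range(len(candidates)),
               key=lambda i: dise_confidence(candidates[i][1],
                                             candidates[i][0]))

# Confident model: each generated token gets logit 5 vs. 0 alternatives.
confident = dise_confidence([[5.0, 0.0], [0.0, 5.0]], [0, 1])
# Uninformative model: uniform logits give probability 0.5 per token.
uniform = dise_confidence([[0.0, 0.0], [0.0, 0.0]], [0, 0])
```

A length-normalized score like the geometric mean keeps candidates of different lengths comparable, which is why it is used here for the selection step.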