[2510.05725] Improving Discrete Diffusion Unmasking Policies Beyond Explicit Reference Policies

arXiv - Machine Learning · 4 min read

Summary

This article presents a novel approach to improving masked diffusion models (MDMs) for language modeling by introducing a learned scheduler that outperforms traditional heuristic methods.

Why It Matters

The study addresses the limitations of existing unmasking strategies in MDMs, which are crucial for generating coherent language. By optimizing the unmasking process through a learned policy, the research contributes to advancements in natural language processing, enhancing model performance and applicability.

Key Takeaways

  • Introduces a learned scheduler for masked diffusion models (MDMs).
  • Demonstrates significant performance improvements over heuristic methods.
  • Proves that optimized policies yield samples closer to the data distribution.
  • Reports a 20.1% empirical gain on specific benchmarks.
  • Provides code access for further experimentation and validation.
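The takeaways mention casting denoising as a KL-regularized decision process with an explicit reference policy. In generic form (my own notation for illustration; the paper's exact objective may differ), such KL-regularized policy objectives look like:

```latex
\pi^{\star} \;=\; \arg\max_{\pi}\;
\mathbb{E}_{\tau \sim \pi}\big[ R(\tau) \big]
\;-\; \beta \,\mathrm{KL}\big( \pi \,\big\|\, \pi_{\mathrm{ref}} \big)
```

Here $\pi_{\mathrm{ref}}$ would be the reference unmasking policy (e.g., a heuristic schedule), $R$ a reward measuring sample quality, and $\beta$ the regularization strength that keeps the learned scheduler close to the reference.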

Computer Science > Machine Learning
arXiv:2510.05725 (cs)
[Submitted on 7 Oct 2025 (v1), last revised 26 Feb 2026 (this version, v2)]

Title: Improving Discrete Diffusion Unmasking Policies Beyond Explicit Reference Policies
Authors: Chunsan Hong, Seonho An, Min-Soo Kim, Jong Chul Ye

Abstract: Masked diffusion models (MDMs) have recently emerged as a novel framework for language modeling. MDMs generate sentences by iteratively denoising masked sequences, filling in [MASK] tokens step by step. Although MDMs support any-order sampling, performance is highly sensitive to the choice of which position to unmask next. Prior work typically relies on rule-based schedules (e.g., max-confidence, max-margin), which provide ad hoc improvements. In contrast, we replace these heuristics with a learned scheduler. Specifically, we cast denoising as a KL-regularized Markov decision process (MDP) with an explicit reference policy and optimize a regularized objective that admits policy improvement and convergence guarantees under standard assumptions. We prove that the optimized policy under this framework generates samples that more closely match the data distribution than heuristic schedules. Empirically, across four benchmarks, our learned policy consistently outperforms max-confidence: for example, on SUDOKU, where...
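The abstract contrasts the learned scheduler with rule-based baselines such as max-confidence. A minimal sketch of that baseline, as commonly defined (my own illustrative code, not the authors' implementation; `logits_fn` is a stand-in for an MDM's per-position token predictor):

```python
import numpy as np

MASK = -1  # sentinel for a masked position

def max_confidence_decode(logits_fn, length):
    """Iteratively unmask the position whose most likely token has the
    highest model probability (the max-confidence heuristic)."""
    seq = np.full(length, MASK, dtype=int)
    for _ in range(length):
        logits = logits_fn(seq)                          # (length, vocab)
        probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
        probs /= probs.sum(axis=-1, keepdims=True)       # softmax per position
        masked = np.flatnonzero(seq == MASK)             # still-masked slots
        conf = probs[masked].max(axis=-1)                # top-token prob each
        pos = masked[int(np.argmax(conf))]               # most confident slot
        seq[pos] = int(probs[pos].argmax())              # commit its top token
    return seq

# Toy stand-in "model": fixed logits, independent of the current sequence.
rng = np.random.default_rng(0)
toy_logits = rng.normal(size=(6, 10))
out = max_confidence_decode(lambda s: toy_logits, length=6)
```

A learned scheduler, as the paper proposes, would replace the `argmax(conf)` selection rule with a trained policy over which position to unmask next.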


