[2602.06412] Stopping Computation for Converged Tokens in Masked Diffusion-LM Decoding



About this article


Computer Science > Computation and Language
arXiv:2602.06412 (cs)
Submitted on 6 Feb 2026 (v1), last revised 4 Mar 2026 (this version, v2)

Title: Stopping Computation for Converged Tokens in Masked Diffusion-LM Decoding
Authors: Daisuke Oba, Danushka Bollegala, Masahiro Kaneko, Naoaki Okazaki

Abstract: Masked Diffusion Language Models generate sequences via iterative sampling that progressively unmasks tokens. However, they still recompute the attention and feed-forward blocks for every token position at every step, even when many unmasked tokens are essentially fixed, resulting in substantial wasted compute. We propose SureLock: when the posterior at an unmasked position has stabilized across steps (our sure condition), we lock that position, thereafter skipping its query projection and feed-forward sublayers, while caching its attention keys and values so other positions can continue to attend to it. This reduces the dominant per-iteration computational cost from $O(N^2 d)$ to $O(MNd)$, where $N$ is the sequence length, $M$ is the number of unlocked token positions, and $d$ is the model dimension. In practice, $M$ decreases as the iteration progresses, yielding substantial savings. On LLaDA-8B, SureLock reduces algorithmic FLOPs by 30--50% relative to the sa...
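The mechanism in the abstract can be illustrated with a minimal single-head sketch. This is not the authors' implementation; `attention_with_locking`, `sure_condition`, and the tolerance threshold are hypothetical names and values chosen for illustration. The key idea shown: locked positions skip the query projection (and would skip the feed-forward sublayer), their cached keys/values remain attendable, and attention costs $O(MNd)$ because only the $M$ unlocked rows form queries.

```python
import numpy as np

def attention_with_locking(x, Wq, Wk, Wv, locked, kv_cache):
    """Single-head attention where locked positions skip the query
    projection and reuse cached keys/values. Hypothetical sketch of
    the idea described in the abstract, not the paper's code."""
    N, d = x.shape
    unlocked = ~locked
    # Keys/values: recompute only for unlocked positions; locked
    # positions keep their cached K/V so others can still attend to them.
    K = kv_cache["K"].copy()
    V = kv_cache["V"].copy()
    K[unlocked] = x[unlocked] @ Wk
    V[unlocked] = x[unlocked] @ Wv
    kv_cache["K"], kv_cache["V"] = K, V
    # Queries only for the M unlocked positions: O(M*N*d) instead of O(N^2*d).
    Q = x[unlocked] @ Wq                                  # (M, d)
    scores = Q @ K.T / np.sqrt(d)                         # (M, N)
    probs = np.exp(scores - scores.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    out = x.copy()            # locked rows pass through unchanged
    out[unlocked] = probs @ V
    return out

def sure_condition(prev_posterior, posterior, tol=1e-3):
    """Lock a position once its posterior has stabilized across steps:
    here, total-variation change below tol (threshold is illustrative)."""
    return 0.5 * np.abs(posterior - prev_posterior).sum(axis=-1) < tol
```

In a full decoder, positions passing the sure condition would be added to `locked` each iteration, so `M` shrinks as decoding progresses, which is where the reported FLOP savings come from.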

Originally published on March 05, 2026. Curated by AI News.

