[2602.10953] Search or Accelerate: Confidence-Switched Position Beam Search for Diffusion Language Models
Summary
The paper presents SOAR, a training-free decoding algorithm for Diffusion Language Models that switches its behavior based on model confidence: it widens the search over alternative unmasking decisions when confidence is low, and decodes many positions in parallel when confidence is high, improving generation quality while maintaining efficiency.
Why It Matters
This research addresses a key challenge in decoding for Diffusion Language Models: the standard greedy rule of unmasking the most confident positions can lock the model into a suboptimal unmasking order, especially on reasoning-heavy prompts. By improving the trade-off between generation quality and inference speed, SOAR offers a practical, drop-in option for developers and researchers working with these models.
Key Takeaways
- SOAR adapts decoding strategies based on model confidence levels.
- The algorithm improves text generation quality in reasoning-heavy tasks.
- It maintains competitive inference speed, balancing quality and efficiency.
- SOAR is training-free, making it accessible for immediate implementation.
- The research benchmarks SOAR on GSM8K, MBPP, and HumanEval using the Dream-7B and LLaDA-8B models.
Computer Science > Computation and Language, arXiv:2602.10953 (cs)
Submitted on 11 Feb 2026 (v1), last revised 25 Feb 2026 (this version, v2)
Title: Search or Accelerate: Confidence-Switched Position Beam Search for Diffusion Language Models
Authors: Mingyu Cao, Alvaro H.C. Correia, Christos Louizos, Shiwei Liu, Lu Yin
Abstract: Diffusion Language Models (DLMs) generate text by iteratively denoising a masked sequence, repeatedly deciding which positions to commit at each step. Standard decoding follows a greedy rule: unmask the most confident positions. Yet this local choice can lock the model into a suboptimal unmasking order, especially on reasoning-heavy prompts. We present SOAR, a training-free decoding algorithm that adapts its behavior to the model's uncertainty. When confidence is low, SOAR briefly widens the search over alternative unmasking decisions to avoid premature commitments; when confidence is high, it collapses the search and decodes many positions in parallel to reduce the number of denoising iterations. Across mathematical reasoning and code generation benchmarks (GSM8K, MBPP, HumanEval) on Dream-7B and LLaDA-8B, SOAR improves generation quality while maintaining competitive inference speed, offering a practical way to balance quality and efficiency in DLM decoding. Our C...
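To make the confidence-switched idea concrete, here is a minimal toy sketch of the decoding loop described in the abstract. Everything below is an illustrative assumption, not the paper's implementation: `model_confidences` is a stand-in for a DLM forward pass, the threshold and beam width are invented, and the "search" branch uses a crude one-step lookahead rather than the paper's position beam search.

```python
import random

CONF_THRESHOLD = 0.9   # assumed switch point between search and parallel modes
BEAM_WIDTH = 3         # assumed number of alternatives explored when unsure


def model_confidences(tokens):
    """Stand-in for a DLM denoising step: returns a (confidence, token)
    guess for every still-masked position. Deterministic toy model."""
    masked = [i for i, t in enumerate(tokens) if t is None]
    rng = random.Random(len(masked))
    return {i: (rng.uniform(0.5, 1.0), f"tok{i}") for i in masked}


def soar_style_decode(length):
    """Fill `length` masked slots, switching between parallel commits
    (high confidence) and a widened search (low confidence)."""
    tokens = [None] * length          # None marks a masked position
    steps = 0
    while any(t is None for t in tokens):
        steps += 1
        conf = model_confidences(tokens)
        confident = {i: v for i, v in conf.items() if v[0] >= CONF_THRESHOLD}
        if confident:
            # High confidence: commit every confident position in parallel,
            # reducing the number of denoising iterations.
            for i, (_, tok) in confident.items():
                tokens[i] = tok
        else:
            # Low confidence: briefly widen the search. Explore the
            # BEAM_WIDTH most confident positions, score each trial
            # commitment by the average confidence it induces at the
            # next step (a stub lookahead), and keep the best one.
            beam = sorted(conf.items(), key=lambda kv: -kv[1][0])[:BEAM_WIDTH]

            def lookahead_score(pos, tok):
                trial = list(tokens)
                trial[pos] = tok
                nxt = model_confidences(trial)
                return sum(c for c, _ in nxt.values()) / max(len(nxt), 1)

            best_i, (_, best_tok) = max(
                beam, key=lambda kv: lookahead_score(kv[0], kv[1][1]))
            tokens[best_i] = best_tok
    return tokens, steps
```

The switch is the whole point: when the model is sure, search overhead is skipped and many positions commit at once; when it is unsure, a few extra forward passes buy a better unmasking order.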