[2602.18292] Decoding as Optimisation on the Probability Simplex: From Top-K to Top-P (Nucleus) to Best-of-K Samplers

arXiv - Machine Learning · 3 min read · Article

Summary

This paper presents a framework that casts decoding in language models as a principled optimisation problem over the probability simplex. Within that framework it introduces Best-of-K (BoK) sampling, a decoder built for multi-sample pipelines that raises the chance of covering good alternatives within a fixed K-sample budget.

Why It Matters

Treating decoding as an optimisation problem rather than a collection of heuristics explains why existing strategies share a common structure and gives a structured recipe for deriving new ones, which could improve language-model applications such as self-consistency, reranking, and verifier selection.

Key Takeaways

  • Decoding should be treated as an optimisation problem over the probability simplex, solved anew at each token.
  • The proposed Best-of-K (BoK) sampling method improves performance by covering good alternatives within a fixed K-sample budget.
  • The framework unifies greedy, Softmax, Top-K, Top-P, and Sparsemax-style decoding under a single optimisation template (a minimal sketch follows this list).
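
The "single optimisation template" claim has concrete cash value: greedy, Softmax, Top-K, and Top-P all reduce to "truncate the distribution, then renormalise", which the paper derives from the optimality conditions of one regularised objective over the simplex. The NumPy sketch below shows that shared structure under my own assumptions; the function name and defaults are illustrative, and the paper's actual decoders arise as solutions of the optimisation problem rather than as hand-written rules like these.

    import numpy as np

    def decode_distribution(logits, strategy="softmax", k=50, p=0.9, temperature=1.0):
        # Map logits to a point on the probability simplex.
        z = np.asarray(logits, dtype=np.float64) / temperature
        probs = np.exp(z - z.max())
        probs /= probs.sum()

        if strategy == "greedy":
            # All mass on the argmax: a vertex of the simplex.
            q = np.zeros_like(probs)
            q[np.argmax(probs)] = 1.0
            return q
        if strategy == "top_k":
            # Keep the k highest-probability tokens.
            cutoff = np.sort(probs)[-min(k, probs.size)]
            mask = probs >= cutoff
        elif strategy == "top_p":
            # Smallest set whose cumulative mass reaches p (the nucleus).
            order = np.argsort(probs)[::-1]
            csum = np.cumsum(probs[order])
            mask = np.zeros_like(probs, dtype=bool)
            mask[order[: np.searchsorted(csum, p) + 1]] = True
        else:
            # Plain softmax sampling: no truncation at all.
            return probs

        # The step every truncating decoder shares: renormalise the
        # surviving mass back onto the probability simplex.
        q = np.where(mask, probs, 0.0)
        return q / q.sum()

Sampling a token is then a single draw from the returned distribution, e.g. np.random.choice(len(q), p=q). Sparsemax-style decoders fit the same mould but pick the support via a Euclidean projection onto the simplex rather than a hard cutoff.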

Computer Science > Machine Learning
arXiv:2602.18292 (cs)
[Submitted on 20 Feb 2026]

Title: Decoding as Optimisation on the Probability Simplex: From Top-K to Top-P (Nucleus) to Best-of-K Samplers
Authors: Xiaotong Ji, Rasul Tutunov, Matthieu Zimmer, Haitham Bou-Ammar

Abstract: Decoding sits between a language model and everything we do with it, yet it is still treated as a heuristic knob-tuning exercise. We argue decoding should be understood as a principled optimisation layer: at each token, we solve a regularised problem over the probability simplex that trades off model score against structural preferences and constraints. This single template recovers greedy decoding, Softmax sampling, Top-K, Top-P, and Sparsemax-style sparsity as special cases, and explains their common structure through optimality conditions. More importantly, the framework makes it easy to invent new decoders without folklore. We demonstrate this by designing Best-of-K (BoK), a KL-anchored coverage objective aimed at multi-sample pipelines (self-consistency, reranking, verifier selection). BoK targets the probability of covering good alternatives within a fixed K-sample budget and improves empirical performance. We show that such samples can improve accuracy by, for example, +18.6% for Qwen2.5-Math-7B on...
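
The excerpt stops short of BoK's full per-token objective; it says only that BoK is KL-anchored and maximises the probability that a fixed budget of K samples covers a good alternative. The multi-sample pipeline it targets is standard, though, so here is a minimal Python sketch of a Best-of-K loop with self-consistency (majority-vote) selection. The generate callable is a hypothetical sampler, e.g. one drawing tokens from a BoK-shaped distribution; nothing in this interface comes from the paper.

    from collections import Counter

    def best_of_k(generate, prompt, k=8):
        # Draw K independent completions. BoK shapes the per-token
        # distribution so that this fixed budget is more likely to
        # include at least one good answer.
        answers = [generate(prompt) for _ in range(k)]
        # Self-consistency: keep the most frequent final answer.
        best_answer, _ = Counter(answers).most_common(1)[0]
        return best_answer

For the reranking and verifier-selection variants named in the abstract, the majority vote would be replaced by an external scorer that picks the highest-scoring sample.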

Related Articles

Llms

I think we’re about to have a new kind of “SEO”… and nobody is talking about it.

More people are asking ChatGPT things like: “what’s the best CRM?” “is this tool worth it?” “alternatives to X” And they just… trust the ...

Reddit - Artificial Intelligence · 1 min ·
Llms

Why would Claude give me the same response over and over and give others different replies?

I asked Claude to "generate me a random word" so I could do some word play. Then I asked it again in a new prompt window on desktop after...

Reddit - Artificial Intelligence · 1 min ·
Llms

Anthropic blocks OpenClaw from Claude subscriptions

Anthropic forces pay-as-you-go pricing for OpenClaw users after creator joins OpenAI

AI Tools & Products · 6 min ·
Llms

wtf bro did what? arc 3 2026

The Physarum Explorer is a high-speed, bio-inspired neural model designed specifically for ARC geometry. Here is the snapshot of its curr...

Reddit - Artificial Intelligence · 1 min ·