[2602.18292] Decoding as Optimisation on the Probability Simplex: From Top-K to Top-P (Nucleus) to Best-of-K Samplers
Summary
This paper presents a framework that casts decoding in language models as a principled optimization problem over the probability simplex, rather than a collection of heuristics. Within this framework it introduces Best-of-K sampling, a decoder designed for multi-sample pipelines that maximizes the chance of covering good alternatives within a fixed K-sample budget.
Why It Matters
Understanding decoding as an optimization problem rather than a heuristic approach can significantly improve the effectiveness of language models. This research provides a structured method to develop new decoding strategies, potentially leading to advancements in natural language processing applications.
Key Takeaways
- Decoding should be treated as an optimization problem over the probability simplex.
- The proposed Best-of-K sampling method improves performance by covering good alternatives.
- This framework unifies various decoding strategies under a single optimization template.
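To make the unified view concrete, here is a minimal NumPy sketch of two of the decoders the paper recovers as special cases, Top-K and Top-P (nucleus) sampling, written as truncate-and-renormalize operations on a probability vector. This is an illustrative sketch of the standard decoders, not the paper's own code; the function names and the example probabilities are ours.

```python
import numpy as np

def top_k_filter(probs: np.ndarray, k: int) -> np.ndarray:
    """Keep the k most probable tokens, zero out the rest, and renormalise."""
    kept = np.zeros_like(probs)
    idx = np.argsort(probs)[-k:]          # indices of the k largest probabilities
    kept[idx] = probs[idx]
    return kept / kept.sum()

def top_p_filter(probs: np.ndarray, p: float) -> np.ndarray:
    """Keep the smallest set of tokens whose cumulative mass reaches p (nucleus)."""
    order = np.argsort(probs)[::-1]       # tokens sorted by descending probability
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p) + 1  # number of tokens needed to cover mass p
    kept = np.zeros_like(probs)
    kept[order[:cutoff]] = probs[order[:cutoff]]
    return kept / kept.sum()

probs = np.array([0.5, 0.3, 0.1, 0.07, 0.03])
print(top_k_filter(probs, 2))    # mass concentrated on the two largest entries
print(top_p_filter(probs, 0.75)) # nucleus {0.5, 0.3} is the smallest set covering 0.75
```

Both decoders share the structure the paper's optimality conditions explain: restrict support to a subset of tokens, then renormalize the model's probabilities over that support.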
Computer Science > Machine Learning
arXiv:2602.18292 (cs) [Submitted on 20 Feb 2026]
Title: Decoding as Optimisation on the Probability Simplex: From Top-K to Top-P (Nucleus) to Best-of-K Samplers
Authors: Xiaotong Ji, Rasul Tutunov, Matthieu Zimmer, Haitham Bou-Ammar
Abstract: Decoding sits between a language model and everything we do with it, yet it is still treated as a heuristic knob-tuning exercise. We argue decoding should be understood as a principled optimisation layer: at each token, we solve a regularised problem over the probability simplex that trades off model score against structural preferences and constraints. This single template recovers greedy decoding, Softmax sampling, Top-K, Top-P, and Sparsemax-style sparsity as special cases, and explains their common structure through optimality conditions. More importantly, the framework makes it easy to invent new decoders without folklore. We demonstrate this by designing Best-of-K (BoK), a KL-anchored coverage objective aimed at multi-sample pipelines (self-consistency, reranking, verifier selection). BoK targets the probability of covering good alternatives within a fixed K-sample budget and improves empirical performance. We show that such samples can improve accuracy by, for example, +18.6% for Qwen2.5-Math-7B on...
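The abstract also lists Sparsemax-style sparsity as a special case of the same regularised-optimisation template. As one concrete instance, here is a minimal NumPy sketch of the standard sparsemax operator (Martins & Astudillo, 2016), the Euclidean projection of a score vector onto the probability simplex; this is a sketch of the well-known algorithm, not the paper's implementation.

```python
import numpy as np

def sparsemax(z: np.ndarray) -> np.ndarray:
    """Project scores z onto the probability simplex under squared Euclidean
    distance; unlike softmax, the result can put exactly zero mass on tokens."""
    z_sorted = np.sort(z)[::-1]           # scores in descending order
    k = np.arange(1, len(z) + 1)
    cum = np.cumsum(z_sorted)
    support = k * z_sorted > cum - 1      # condition 1 + k*z_(k) > sum_{j<=k} z_(j)
    k_z = k[support][-1]                  # size of the support set
    tau = (cum[k_z - 1] - 1) / k_z        # threshold shifting scores onto the simplex
    return np.maximum(z - tau, 0.0)

print(sparsemax(np.array([2.0, 1.0, 0.1])))  # a dominant score gets all the mass
print(sparsemax(np.array([1.0, 0.9, 0.1])))  # close scores share mass; the rest is zero
```

Because the projection solves a constrained optimisation problem in closed form via a threshold, it fits the paper's template of trading off model score against structural preferences on the simplex.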