[2603.21162] Revisiting Tree Search for LLMs: Gumbel and Sequential Halving for Budget-Scalable Reasoning
Computer Science > Artificial Intelligence
arXiv:2603.21162 (cs)
[Submitted on 22 Mar 2026]

Title: Revisiting Tree Search for LLMs: Gumbel and Sequential Halving for Budget-Scalable Reasoning
Authors: Leonid Ugadiarov, Yuri Kuratov, Aleksandr Panov, Alexey Skrynnik

Abstract: Neural tree search is a powerful decision-making algorithm widely used in complex domains such as game playing and model-based reinforcement learning. Recent work has applied AlphaZero-style tree search to enhance the reasoning capabilities of Large Language Models (LLMs) during inference, but we find that this approach suffers from a scaling failure: on GSM8K and Game24, accuracy drops as the search budget increases. In this paper, we present ReSCALE, an adaptation of Gumbel AlphaZero MCTS that replaces Dirichlet noise and PUCT selection with Gumbel sampling and Sequential Halving, restoring monotonic scaling without changes to the model or its training. ReSCALE reaches 58.4% on GSM8K and 85.3% on Game24 at budgets where the baseline degrades. Ablations confirm that Sequential Halving is the primary driver of the improvement.

Subjects: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2603.21162 [cs.AI] (or arXiv:2603.21162v1 [cs.AI] for this version)
https://doi.org/10.48550...
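The abstract names the two ingredients ReSCALE substitutes for Dirichlet noise and PUCT: Gumbel sampling at the root and Sequential Halving to allocate the simulation budget. The paper's own implementation is not shown here, so the sketch below is only a generic, minimal illustration of those two well-known primitives: the Gumbel-Top-k trick for sampling candidate actions without replacement from policy logits, and Sequential Halving, which splits the budget into log2(m) phases and discards the lower-scoring half of the candidates after each phase. The function names, the toy `score` callable, and the exact budget split are all assumptions for illustration, not the paper's API.

```python
import math
import random

def gumbel_top_k(logits, k, rng=random):
    # Gumbel-Top-k trick: adding i.i.d. Gumbel(0, 1) noise to logits and
    # taking the k largest perturbed values samples k indices without
    # replacement from the softmax distribution over the logits.
    g = [l - math.log(-math.log(rng.random())) for l in logits]
    return sorted(range(len(logits)), key=lambda i: g[i], reverse=True)[:k]

def sequential_halving(actions, score, budget):
    # Sequential Halving: split `budget` score evaluations into
    # ceil(log2(m)) phases; in each phase, evaluate every surviving
    # action equally often, then keep the better-scoring half.
    survivors = list(actions)
    phases = max(1, math.ceil(math.log2(len(survivors))))
    total = {a: 0.0 for a in survivors}
    visits = {a: 0 for a in survivors}
    for _ in range(phases):
        per_action = max(1, budget // (phases * len(survivors)))
        for a in survivors:
            for _ in range(per_action):
                total[a] += score(a)  # e.g. the value of one MCTS simulation
                visits[a] += 1
        # Rank by empirical mean value and halve the candidate set.
        survivors.sort(key=lambda a: total[a] / visits[a], reverse=True)
        survivors = survivors[: max(1, len(survivors) // 2)]
    return survivors[0]

# Toy usage: pick candidates via Gumbel-Top-k, then let Sequential
# Halving spend the budget on them. The deterministic `score` here
# stands in for averaged simulation returns.
candidates = gumbel_top_k([0.1, 0.2, 0.3, 0.4], k=4)
best = sequential_halving(candidates, score=lambda a: float(a), budget=32)
```

With a deterministic score, the halving schedule provably converges on the highest-scoring candidate; in the stochastic MCTS setting it instead concentrates simulations on the empirically best arms, which is the budget-adaptive behavior the abstract credits as the primary driver of the improvement.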