[2509.25835] Chain-in-Tree: Back to Sequential Reasoning in LLM Tree Search
Computer Science > Artificial Intelligence
arXiv:2509.25835 (cs)
[Submitted on 30 Sep 2025 (v1), last revised 10 Apr 2026 (this version, v4)]

Title: Chain-in-Tree: Back to Sequential Reasoning in LLM Tree Search
Authors: Xinzhe Li

Abstract: Test-time scaling improves large language models (LLMs) on long-horizon reasoning tasks by allocating more compute at inference. LLM inference via tree search (LITS) achieves strong performance but is highly inefficient. We propose Chain-in-Tree (CiT), a plug-in framework that decides when to branch during search instead of expanding at every step. CiT introduces lightweight Branching Necessity (BN) evaluations: BN-DP (direct prompting) and BN-SC (self-consistency). Integrated into Tree of Thoughts, ReST-MCTS, and RAP, BN-DP reduces token generation, model calls, and runtime by 75-85% on GSM8K and Math500, often with negligible or no accuracy loss. BN-SC typically yields substantial savings (up to 80%) but shows instability in 1-4 of 14 settings, caused by a small subset of examples that produce extremely long reasoning steps. We theoretically prove that BN-DP never increases policy invocations and release unified implementations applicable across LITS frameworks. The full codebase is publicly available at this https URL.

Subjects: Artificial Intelligence (cs.AI)
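To make the core idea concrete, here is a minimal sketch of a branching-necessity gate in a tree-search loop. This is not the paper's implementation: the function names (`bn_sc`, `chain_in_tree`), the toy state representation, and the `judge` callable standing in for an LLM prompt are all illustrative assumptions. It only shows the control-flow idea the abstract describes: extend a single chain by default, and expand into multiple children only when a self-consistency-style vote says branching is necessary.

```python
def bn_sc(state, judge, n_samples=5):
    """Sketch of Branching Necessity via Self-Consistency (BN-SC):
    sample the judge n_samples times and branch only on a majority vote.
    `judge` is a hypothetical stand-in for an LLM call that returns
    True (branch here) or False (keep following the current chain)."""
    votes = sum(bool(judge(state)) for _ in range(n_samples))
    return votes * 2 > n_samples

def chain_in_tree(root, expand, judge, max_depth=6, width=3):
    """Toy search loop: by default each state gets one child (sequential,
    chain-like reasoning); it is expanded into `width` children only when
    the BN evaluation votes to branch."""
    frontier = [root]
    for _ in range(max_depth):
        next_frontier = []
        for state in frontier:
            k = width if bn_sc(state, judge) else 1  # branch only when necessary
            next_frontier.extend(expand(state, k))
        frontier = next_frontier
    return frontier

# Usage with trivial string states: states are strings, `expand` appends a
# child index, and `judge` is deterministic for illustration.
expand = lambda s, k: [s + str(i) for i in range(k)]
chain_only = chain_in_tree("r", expand, judge=lambda s: False, max_depth=3, width=2)
full_tree = chain_in_tree("r", expand, judge=lambda s: True, max_depth=3, width=2)
```

With a judge that never votes to branch, the search degenerates to a single chain (one leaf); with a judge that always votes to branch, it recovers full tree expansion (width ** depth leaves). CiT's efficiency gains come from spending the cheaper BN check to stay near the first regime whenever branching is unnecessary.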