[2509.25835] Chain-in-Tree: Back to Sequential Reasoning in LLM Tree Search


Computer Science > Artificial Intelligence

arXiv:2509.25835 (cs)

[Submitted on 30 Sep 2025 (v1), last revised 10 Apr 2026 (this version, v4)]

Title: Chain-in-Tree: Back to Sequential Reasoning in LLM Tree Search

Authors: Xinzhe Li

Abstract: Test-time scaling improves large language models (LLMs) on long-horizon reasoning tasks by allocating more compute at inference. LLM inference via tree search (LITS) achieves strong performance but is highly inefficient. We propose Chain-in-Tree (CiT), a plug-in framework that decides when to branch during search instead of expanding at every step. CiT introduces lightweight Branching Necessity (BN) evaluations, including BN-DP (direct prompting) and BN-SC (self-consistency). Integrated into Tree of Thoughts, ReST-MCTS, and RAP, BN-DP reduces token generation, model calls, and runtime by 75-85% on GSM8K and Math500, often with negligible or no accuracy loss. BN-SC typically yields substantial savings (up to 80%) but shows instability in 1-4 out of 14 settings, caused by a small subset of examples that produce extremely long reasoning steps. We theoretically prove that BN-DP never increases policy invocations and release unified implementations applicable across LITS frameworks. The full codebase is publicly available at this https URL.

Subjects: Artificial Intelligence (cs.AI)

Cite as: ...
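The mechanism the abstract describes — gating tree expansion behind a lightweight branching-necessity check, and otherwise extending a single sequential chain — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the names (`bn_score`, `step`, `cit_expand`) and the toy string-based heuristic standing in for the BN-DP prompt and the policy LLM are assumptions.

```python
# Hypothetical sketch of Chain-in-Tree's branching-necessity (BN) gating.
# All names and the toy heuristic are illustrative, not the paper's API.

from dataclasses import dataclass, field

@dataclass
class Node:
    state: str
    children: list = field(default_factory=list)

def bn_score(state: str) -> float:
    # Stand-in for BN-DP: a real system would prompt the LLM to judge
    # whether this step genuinely needs multiple candidate continuations.
    # Toy heuristic: branch only when the state ends with a '?'.
    return 1.0 if state.endswith("?") else 0.0

def step(state: str, k: int = 1) -> list:
    # Stand-in for the policy LLM proposing k next reasoning steps.
    return [f"{state}->s{i}" for i in range(k)]

def cit_expand(node: Node, branch_factor: int = 3, threshold: float = 0.5) -> list:
    """Branch only when BN deems it necessary; otherwise extend a
    single sequential chain (the 'chain in tree')."""
    k = branch_factor if bn_score(node.state) >= threshold else 1
    node.children = [Node(s) for s in step(node.state, k)]
    return node.children

root = Node("start?")
kids = cit_expand(root)      # BN fires: branch into 3 children
tail = cit_expand(kids[0])   # BN silent: single sequential step
print(len(kids), len(tail))  # 3 1
```

Because every non-branching step costs one policy call instead of `branch_factor` calls, a sketch like this makes it plausible that BN gating never increases policy invocations, which the paper proves formally for BN-DP.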

Originally published on April 13, 2026. Curated by AI News.

