[2601.21684] Do Not Waste Your Rollouts: Recycling Search Experience for Efficient Test-Time Scaling

arXiv - Machine Learning


Computer Science > Computation and Language

arXiv:2601.21684 (cs) [Submitted on 29 Jan 2026 (v1), last revised 5 May 2026 (this version, v2)]

Title: Do Not Waste Your Rollouts: Recycling Search Experience for Efficient Test-Time Scaling

Authors: Xinglin Wang, Jiayi Shi, Shaoxiong Feng, Peiwen Yuan, Yiwei Li, Yueqi Zhang, Chuyi Tan, Ji Zhang, Boyuan Pan, Yao Hu, Kan Li

Abstract: Test-Time Scaling enhances the reasoning capabilities of Large Language Models by allocating additional inference compute to broaden the exploration of the solution space. However, existing search strategies typically treat rollouts as disposable samples, where valuable intermediate insights are effectively discarded after each trial. This wasted rollout-level experience leads to substantial computational redundancy, as models repeatedly re-derive discovered conclusions and revisit known dead ends across extensive attempts. To bridge this gap, we propose Recycling Search Experience (RSE), a self-guided, training-free strategy that turns test-time search from a series of isolated trials into a cumulative, experience-guided process. By actively distilling raw trajectories into a shared experience bank, RSE enables positive recycling of intermediate conclusions to shortcut redundant derivations and negative recycling of failure patterns...
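The abstract describes a shared experience bank that distills raw rollout trajectories into reusable positive conclusions and negative failure patterns. The paper's actual data structures and prompts are not given here, so the following is only a minimal sketch of how such a bank might look; all class, field, and key names (`ExperienceBank`, `distill`, `guidance`, `claim`, `verified`) are hypothetical illustrations, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ExperienceBank:
    """Hypothetical shared store of distilled search experience across rollouts."""
    conclusions: list = field(default_factory=list)  # positive: verified intermediate results
    dead_ends: set = field(default_factory=set)      # negative: failure patterns to avoid

    def distill(self, trajectory: list, succeeded: bool) -> None:
        """Extract reusable experience from one finished rollout trajectory."""
        # Positive recycling: keep verified intermediate conclusions from any trial,
        # so later rollouts can shortcut redundant derivations.
        for step in trajectory:
            if step.get("verified") and step["claim"] not in self.conclusions:
                self.conclusions.append(step["claim"])
        # Negative recycling: record where a failed trajectory went wrong,
        # so later rollouts avoid revisiting known dead ends.
        if not succeeded and trajectory:
            self.dead_ends.add(trajectory[-1]["claim"])

    def guidance(self) -> str:
        """Summary string that could be injected into the next rollout's prompt."""
        pos = "; ".join(self.conclusions) or "none"
        neg = "; ".join(sorted(self.dead_ends)) or "none"
        return f"Known results: {pos}. Avoid: {neg}."
```

Under this sketch, each search iteration would call `distill` on its finished trajectory and prepend `guidance()` to the next attempt's prompt, turning isolated trials into a cumulative process.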

Originally published on May 06, 2026. Curated by AI News.

