[2602.03773] Reasoning Cache: Continual Improvement Over Long Horizons via Short-Horizon RL


Computer Science > Machine Learning

arXiv:2602.03773 (cs) — Submitted on 3 Feb 2026 (v1), last revised 22 Mar 2026 (this version, v2)

Title: Reasoning Cache: Continual Improvement Over Long Horizons via Short-Horizon RL

Authors: Ian Wu, Yuxiao Qu, Amrith Setlur, Aviral Kumar

Abstract: Large Language Models (LLMs) that can continually improve beyond their training budgets are able to solve increasingly difficult problems by adapting at test time, a property we refer to as extrapolation. However, standard reinforcement learning (RL) operates over fixed problem distributions and training budgets, which limits extrapolation amidst distribution shift at test time. To address this, we introduce RC, an iterative decoding algorithm that replaces standard autoregressive decoding during both training and inference. RC exploits an asymmetry between the response generation and summarization capabilities of LLMs to construct reasoning chains that consistently improve across iterations. Models trained to use RC can extrapolate and continually improve over reasoning horizons more than an order of magnitude longer than those seen during training. Empirically, training a 4B model with RC using a 16k-token training budget improves performance on HMMT 2025 from 40% to nearly 70% with 0.5m tokens at test time, outperforming both comparably sized mo...
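The abstract describes RC as an iterative decoding loop: each iteration generates a response, a summary compresses the useful progress, and that summary seeds the next iteration. The sketch below is a hypothetical illustration of that loop only; the paper's actual prompts, model calls, and summarization scheme are not specified in the abstract, so `generate` and `summarize` here are illustrative stubs, not the authors' method.

```python
# Hypothetical sketch of the RC-style loop described in the abstract:
# generate a short-horizon response, summarize it into a cache, and
# condition the next generation on that cache. Model calls are stubbed.

def generate(problem: str, cache: str) -> str:
    # Stub: a real implementation would prompt an LLM with the problem
    # plus the cached summary of prior reasoning.
    return f"attempt using [{cache}]" if cache else "initial attempt"

def summarize(response: str, cache: str) -> str:
    # Stub: a real implementation would ask the LLM to compress the new
    # response and the old cache into a short summary of progress.
    snippet = response[:20]
    return f"{cache} | {snippet}" if cache else snippet

def reasoning_cache_decode(problem: str, iterations: int) -> str:
    cache = ""
    response = ""
    for _ in range(iterations):
        response = generate(problem, cache)  # short-horizon generation
        cache = summarize(response, cache)   # compress into the cache
    return response

print(reasoning_cache_decode("sample contest problem", iterations=3))
```

Because each iteration only conditions on the compact cache rather than the full transcript, the per-iteration context stays bounded, which is consistent with the abstract's claim that RC can extend reasoning horizons well beyond the training budget.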

Originally published on March 24, 2026. Curated by AI News.

