[2602.19362] LLMs Can Learn to Reason Via Off-Policy RL

arXiv - Machine Learning

Summary

The paper presents OAPL, a novel off-policy reinforcement learning algorithm for Large Language Models (LLMs) that improves reasoning without importance-sampling corrections or modifications to the inference engine.

Why It Matters

This research addresses a limitation of current reinforcement learning approaches for LLMs: policy lag from distributed training makes the data off-policy, and existing methods spend effort forcing it to look on-policy. By embracing off-policyness with OAPL, the authors provide a more efficient training method, which could improve LLM reasoning and coding performance.

Key Takeaways

  • OAPL outperforms GRPO with importance sampling on competition math benchmarks.
  • The algorithm trains effectively even under significant policy lag.
  • OAPL matches DeepCoder, a publicly available coding model, on LiveCodeBench.
  • OAPL uses 3x fewer generations during training than existing methods.
  • Models trained with OAPL show improved test-time scaling under the Pass@k metric.
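
The GRPO-with-importance-sampling baseline named in the takeaways corrects for policy lag by reweighting each token with the ratio between the current training policy and the (stale) inference policy that generated the data. Below is an illustrative sketch of that baseline under a PPO-style clip, not of OAPL itself; the function name and per-token log-probability inputs are assumptions for illustration.

```python
import math

def grpo_is_loss(logp_train, logp_infer, advantages, clip_eps=0.2):
    """Clipped, importance-weighted policy-gradient loss (sketch).

    logp_train: per-token log-probs under the current training policy.
    logp_infer: per-token log-probs under the lagged inference policy
                that actually generated the data.
    advantages: per-token advantage estimates.
    """
    losses = []
    for lt, li, adv in zip(logp_train, logp_infer, advantages):
        # importance ratio pi_train(token) / pi_infer(token)
        ratio = math.exp(lt - li)
        # PPO-style clipping keeps stale-data updates conservative
        clipped = max(1.0 - clip_eps, min(1.0 + clip_eps, ratio))
        losses.append(-min(ratio * adv, clipped * adv))
    return sum(losses) / len(losses)
```

When training and inference policies agree (zero lag), the ratio is 1 and this reduces to an ordinary policy-gradient loss; the paper's point is that OAPL avoids needing this correction at all.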

Computer Science > Machine Learning
arXiv:2602.19362 (cs) · Submitted on 22 Feb 2026

Title: LLMs Can Learn to Reason Via Off-Policy RL
Authors: Daniel Ritter, Owen Oertell, Bradley Guo, Jonathan Chang, Kianté Brantley, Wen Sun

Abstract: Reinforcement learning (RL) approaches for Large Language Models (LLMs) frequently use on-policy algorithms such as PPO or GRPO, which assume that training data is generated by the current policy. However, policy lag from distributed training architectures and differences between the training and inference policies break this assumption, making the data off-policy by design. To rectify this, prior work has focused on making this off-policy data appear more on-policy, either via importance sampling (IS) or by more closely aligning the training and inference policies through explicit modification of the inference engine. In this work, we embrace off-policyness and propose a novel off-policy RL algorithm that requires neither of these modifications: Optimal Advantage-based Policy Optimization with Lagged Inference policy (OAPL). We show that OAPL outperforms GRPO with importance sampling on competition math benchmarks, and can match the performance of a publicly available coding model, DeepCoder, on LiveCodeBench, while using 3x fewer generations during training. We further empirically demonstrate that models trained via OAPL have improved test-time scaling under the Pass@k metric. OA...
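
The Pass@k metric mentioned in the abstract measures the probability that at least one of k sampled generations solves the task. It is commonly computed with the unbiased combinatorial estimator: generate n samples, count the c that pass, and evaluate 1 - C(n-c, k)/C(n, k). A minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator.

    n: total generations sampled per problem.
    c: number of those generations that pass.
    k: budget being evaluated (k <= n).
    Returns the probability that at least one of k samples drawn
    without replacement from the n generations is correct.
    """
    if n - c < k:
        # too few failures to fill k slots: success is guaranteed
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with n=2 generations of which c=1 passes, Pass@1 is 0.5. Improved test-time scaling under this metric means accuracy grows faster as k increases.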
