[2508.02833] TIC-GRPO: Provable and Efficient Optimization for Reinforcement Learning from Human Feedback


arXiv - Machine Learning 4 min read

About this article


Computer Science > Machine Learning
arXiv:2508.02833 (cs)
[Submitted on 4 Aug 2025 (v1), last revised 5 Mar 2026 (this version, v3)]

Title: TIC-GRPO: Provable and Efficient Optimization for Reinforcement Learning from Human Feedback
Authors: Lei Pang, Jun Luo, Ruinan Jin

Abstract: Group Relative Policy Optimization (GRPO), recently introduced by DeepSeek, is a critic-free reinforcement learning algorithm for fine-tuning large language models. GRPO replaces the value function in Proximal Policy Optimization (PPO) with group-normalized rewards while retaining PPO-style token-level importance sampling based on an old policy. Our theoretical analysis reveals that the GRPO update rule estimates the policy gradient at the old policy rather than the current one; however, since the old policy is refreshed every few steps, the resulting discrepancy remains small and the induced bias is negligible in practice. To empirically validate this insight, we conduct an ablation study that entirely removes importance sampling and performs multiple optimization steps using gradients estimated at a fixed old policy. Remarkably, this simplified variant attains performance comparable to standard GRPO. Motivated by this finding, we propose Trajectory-level Importance-Corrected GRPO (TIC-GRPO), a new algorithm that replaces token-level i...
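The two ingredients the abstract contrasts can be sketched in a few lines: GRPO's group-normalized advantage (replacing PPO's value function), and the difference between PPO-style per-token importance ratios and a single trajectory-level ratio of the kind TIC-GRPO's name suggests. This is a minimal illustrative sketch, not the paper's implementation; the function names and the `1e-8` stabilizer are assumptions for the example.

```python
import numpy as np

def group_normalized_advantages(rewards):
    """GRPO-style advantage for a group of G sampled responses:
    (r - mean(group)) / std(group), with a small epsilon for stability."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def token_level_ratios(logp_new, logp_old):
    """PPO-style importance sampling: one ratio pi_new/pi_old per token."""
    return np.exp(np.asarray(logp_new) - np.asarray(logp_old))

def trajectory_level_ratio(logp_new, logp_old):
    """Trajectory-level correction: a single ratio for the whole sequence,
    i.e. the product of per-token ratios, computed in log space."""
    return float(np.exp(np.sum(logp_new) - np.sum(logp_old)))
```

For example, a group of rewards `[1.0, 2.0, 3.0]` yields advantages that sum to zero, and when the current and old policies agree on every token, both the per-token ratios and the trajectory ratio are exactly 1.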

Originally published on March 06, 2026. Curated by AI News.

Related Articles

LLMs

[R] Reference model free behavioral discovery of AudiBench model organisms via Probe-Mediated Adaptive Auditing

Anthropic's AuditBench - 56 Llama 3.3 70B models with planted hidden behaviors - their best agent detects the behaviors 10-13% of the tim...

Reddit - Machine Learning · 1 min ·
LLMs

[P] Dante-2B: I'm training a 2.1B bilingual fully open Italian/English LLM from scratch on 2×H200. Phase 1 done — here's what I've built.

The problem If you work with Italian text and local models, you know the pain. Every open-source LLM out there treats Italian as an after...

Reddit - Machine Learning · 1 min ·
LLMs

I have been coding for 11 years and I caught myself completely unable to debug a problem without AI assistance last month. That scared me more than anything I have seen in this industry.

I want to be honest about something that happened to me because I think it is more common than people admit. Last month I hit a bug in a ...

Reddit - Artificial Intelligence · 1 min ·
LLMs

OpenClaw security checklist: practical safeguards for AI agents

Here is one of the better-quality guides on ensuring safety when deploying OpenClaw: https://chatgptguide.ai/openclaw-security-checkl...

Reddit - Artificial Intelligence · 1 min ·

