[2508.02833] TIC-GRPO: Provable and Efficient Optimization for Reinforcement Learning from Human Feedback
Computer Science > Machine Learning
arXiv:2508.02833 (cs)
[Submitted on 4 Aug 2025 (v1), last revised 5 Mar 2026 (this version, v3)]

Title: TIC-GRPO: Provable and Efficient Optimization for Reinforcement Learning from Human Feedback
Authors: Lei Pang, Jun Luo, Ruinan Jin

Abstract: Group Relative Policy Optimization (GRPO), recently introduced by DeepSeek, is a critic-free reinforcement learning algorithm for fine-tuning large language models. GRPO replaces the value function in Proximal Policy Optimization (PPO) with group-normalized rewards while retaining PPO-style token-level importance sampling based on an old policy. Our theoretical analysis reveals that the GRPO update rule estimates the policy gradient at the old policy rather than the current one; however, since the old policy is refreshed every few steps, the resulting discrepancy remains small and the induced bias is negligible in practice. To empirically validate this insight, we conduct an ablation study that entirely removes importance sampling and performs multiple optimization steps using gradients estimated at a fixed old policy. Remarkably, this simplified variant attains performance comparable to standard GRPO. Motivated by this finding, we propose Trajectory-level Importance-Corrected GRPO (TIC-GRPO), a new algorithm that replaces token-level i...
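To make the objects in the abstract concrete, the following is a minimal sketch (not the paper's implementation) of the two ingredients it contrasts: GRPO's group-normalized advantages, which replace PPO's learned value baseline, and the difference between per-token importance ratios and a single trajectory-level ratio. All function names and the epsilon constant are illustrative assumptions.

```python
import numpy as np

def group_normalized_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: normalize rewards within the group of G rollouts
    sampled for one prompt (stands in for PPO's learned value baseline)."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

def token_level_ratios(logp_new, logp_old):
    """PPO/GRPO-style per-token importance ratios
    pi_theta(a_t | s_t) / pi_old(a_t | s_t), computed from log-probs."""
    return np.exp(np.asarray(logp_new) - np.asarray(logp_old))

def trajectory_level_ratio(logp_new, logp_old):
    """Single trajectory-level ratio pi_theta(tau) / pi_old(tau):
    the product of the token ratios, i.e. exp of the summed log-prob gap."""
    return float(np.exp(np.sum(logp_new) - np.sum(logp_old)))

# Toy example: 4 rollouts for one prompt, scalar reward per rollout.
adv = group_normalized_advantages([1.0, 0.0, 2.0, 1.0])

# Toy example: 3-token completion; when the current policy equals the old
# policy, every token ratio is 1 and so is the trajectory ratio.
lp = [-1.2, -0.4, -2.0]
tok = token_level_ratios(lp, lp)
traj = trajectory_level_ratio(lp, lp)
```

Note that the trajectory-level ratio equals the product of the token-level ratios, which is the algebraic link between the two correction schemes the abstract compares.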