[2602.05165] EBPO: Empirical Bayes Shrinkage for Stabilizing Group-Relative Policy Optimization
Summary
The paper presents EBPO, a framework that enhances Group Relative Policy Optimization (GRPO) by applying Empirical Bayes shrinkage to the group baseline, stabilizing reinforcement learning particularly under small group sizes and in reward-saturated regimes where every response in a group receives the same zero reward.
Why It Matters
Stability issues in policy optimization limit the performance and reliability of RL-trained models. EBPO addresses two of them directly: it reduces baseline-estimator variance under small group sizes and preserves a learning signal when every response in a group fails, which matters for deploying RL-tuned LLMs in real-world applications.
Key Takeaways
- EBPO improves stability in reinforcement learning by using a shrinkage estimator.
- The framework guarantees strictly lower Mean Squared Error (MSE) for the baseline estimate than standard GRPO.
- EBPO shows superior performance even with small group sizes and in difficult learning scenarios.
- The approach integrates global statistics to enhance local group-based learning.
- Empirical results demonstrate EBPO's effectiveness across various benchmarks.
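The core idea above, shrinking a noisy local group-mean baseline toward a global prior, can be illustrated with a minimal sketch. The paper's exact shrinkage weight is not given in this summary, so the weight `lam = g / (g + k)` and the prior-strength parameter `k` here are hypothetical placeholders, not the authors' formula:

```python
import numpy as np

def shrunk_baseline(group_rewards, global_mean, k=8.0):
    """Shrink the local group-mean baseline toward a global prior.

    k is a hypothetical prior-strength parameter chosen for
    illustration; the paper's actual shrinkage weight may differ.
    """
    g = len(group_rewards)
    local_mean = float(np.mean(group_rewards))
    lam = g / (g + k)  # trust the local mean more as the group grows
    return lam * local_mean + (1.0 - lam) * global_mean
```

Note the effect in the saturated failure regime: for an all-zero-reward group, plain GRPO's baseline equals zero and every advantage vanishes, whereas the shrunk baseline stays pulled toward the global mean, so the gradient signal does not collapse.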
Computer Science > Machine Learning
arXiv:2602.05165 (cs)
Submitted on 5 Feb 2026 (v1), last revised 23 Feb 2026 (this version, v3)
Authors: Kevin Han, Yuhang Zhou, Mingze Gao, Gedi Zhou, Serena Li, Abhishek Kumar, Xiangjun Fan, Weiwei Li, Lizhu Zhang
Abstract: Reinforcement Learning with Verifiable Rewards (RLVR) has proven effective for enhancing the reasoning capabilities of Large Language Models (LLMs). However, dominant approaches like Group Relative Policy Optimization (GRPO) face critical stability challenges: they suffer from high estimator variance under computational constraints (small group sizes) and vanishing gradient signals in saturated failure regimes where all responses yield identical zero rewards. To address this, we propose Empirical Bayes Policy Optimization (EBPO), a novel framework that regularizes local group-based baselines by borrowing strength from the policy's accumulated global statistics. Instead of estimating baselines in isolation, EBPO employs a shrinkage estimator that dynamically balances local group statistics with a global prior updated via Welford's online algorithm. Theoretically, we demonstrate that EBPO guarantees strictly lower Mean Squared Error (MSE), bounded entropy d...
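The abstract names Welford's online algorithm as the mechanism for maintaining the global prior. That algorithm itself is standard; a minimal sketch of a running mean/variance accumulator of the kind such a prior could use (the class name and interface here are illustrative, not the paper's):

```python
class WelfordAccumulator:
    """Welford's online algorithm: numerically stable running mean and
    variance, updated one observation at a time without storing history."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        # Unbiased sample variance; 0.0 until at least two observations
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0
```

A single pass over a reward stream keeps the global mean and variance current, which is what lets the shrinkage baseline "borrow strength" from all past groups at O(1) memory cost.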