[2509.25762] OPPO: Accelerating PPO-based RLHF via Pipeline Overlap
Computer Science > Machine Learning
arXiv:2509.25762 (cs)
[Submitted on 30 Sep 2025 (v1), last revised 5 Mar 2026 (this version, v2)]

Title: OPPO: Accelerating PPO-based RLHF via Pipeline Overlap
Authors: Kaizhuo Yan, Yingjie Yu, Yifan Yu, Haizhong Zheng, Fan Lai

Abstract: Proximal Policy Optimization (PPO)-based reinforcement learning from human feedback (RLHF) is a widely adopted paradigm for aligning large language models (LLMs) with human preferences. However, its training pipeline suffers from substantial inefficiencies due to sequential multi-model dependencies (e.g., the reward model depends on actor outputs) and long-tail response lengths, where a few long responses delay stage completion. We present OPPO, a novel, lightweight, and model-agnostic PPO-based RLHF framework that improves training efficiency by overlapping pipeline execution. OPPO introduces two techniques: (1) intra-step overlap, which streams upstream model outputs (e.g., from the actor model) in right-sized chunks, enabling the downstream model (e.g., the reward model) to begin prefill while the upstream continues decoding; and (2) inter-step overlap, which adaptively overcommits a few prompts and defers long generations to future steps, mitigating tail latency without discarding partial work. OPPO integrates easily with existing PPO implementations with a lightwe...
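The intra-step overlap idea from the abstract can be sketched as a simple producer/consumer pipeline: an upstream "actor" stage streams its decoded output in fixed-size chunks so a downstream "reward" stage can begin consuming (its prefill) while decoding is still in progress. This is a minimal illustration, not the authors' implementation; the chunk size, stage names, and queue-based hand-off are all assumptions for the sketch.

```python
import queue
import threading

CHUNK = 4          # illustrative "right-sized" chunk of tokens streamed downstream
SENTINEL = None    # marks the end of the actor's response

def actor_decode(response_tokens, out_q):
    """Upstream stage: emit tokens chunk by chunk as they are decoded."""
    for i in range(0, len(response_tokens), CHUNK):
        out_q.put(response_tokens[i:i + CHUNK])
    out_q.put(SENTINEL)

def reward_prefill(in_q, consumed):
    """Downstream stage: start processing chunks before decoding finishes."""
    while (chunk := in_q.get()) is not SENTINEL:
        consumed.extend(chunk)  # stands in for incremental prefill work

tokens = list(range(10))
q, consumed = queue.Queue(maxsize=2), []
t1 = threading.Thread(target=actor_decode, args=(tokens, q))
t2 = threading.Thread(target=reward_prefill, args=(q, consumed))
t1.start(); t2.start(); t1.join(); t2.join()
assert consumed == tokens  # downstream saw the full response, chunk by chunk
```

The bounded queue (`maxsize=2`) is what forces the two stages to run concurrently rather than buffering the whole response, mirroring the idea that downstream prefill overlaps upstream decoding.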