[2603.04135] Unbiased Dynamic Pruning for Efficient Group-Based Policy Optimization
Computer Science > Machine Learning
arXiv:2603.04135 (cs)
[Submitted on 4 Mar 2026]

Title: Unbiased Dynamic Pruning for Efficient Group-Based Policy Optimization
Authors: Haodong Zhu, Yangyang Ren, Yanjing Li, Mingbao Lin, Linlin Yang, Xuhui Liu, Xiantong Zhen, Haiguang Liu, Baochang Zhang

Abstract: Group Relative Policy Optimization (GRPO) effectively scales LLM reasoning but incurs prohibitive computational costs due to its extensive group-based sampling requirement. While recent selective data utilization methods can mitigate this overhead, they can induce estimation bias by altering the underlying sampling distribution, compromising theoretical rigor and convergence behavior. To address this limitation, we propose Dynamic Pruning Policy Optimization (DPPO), a framework that enables dynamic pruning while preserving unbiased gradient estimation through an importance sampling-based correction. By incorporating mathematically derived rescaling factors, DPPO significantly accelerates GRPO training without altering the optimization objective of the full-batch baseline. Furthermore, to mitigate the data sparsity induced by pruning, we introduce Dense Prompt Packing, a window-based greedy strategy that maxi...
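The abstract does not give DPPO's exact rescaling factors, but the unbiasedness argument it invokes is the standard importance-sampling (Horvitz-Thompson-style) correction: if each sample survives pruning with a known probability, dividing its contribution by that probability keeps the estimator's expectation equal to the full-batch value. The sketch below illustrates only that generic principle on scalar values; the keep probabilities and values are hypothetical, not the paper's method.

```python
import random

def full_batch_mean(values):
    """Reference estimator: average over every sample (no pruning)."""
    return sum(values) / len(values)

def pruned_estimate(values, keep_probs, rng):
    """Prune samples stochastically, rescaling survivors by 1/p_i.

    Keeping sample i with probability p_i and weighting it by 1/p_i
    makes the expected value of this estimator equal full_batch_mean,
    i.e. pruning introduces variance but no bias.
    """
    total = 0.0
    for v, p in zip(values, keep_probs):
        if rng.random() < p:          # sample survives pruning
            total += v / p            # importance-sampling correction
    return total / len(values)

rng = random.Random(0)
values = [0.2, 1.5, -0.7, 3.0, 0.9, -1.2]          # hypothetical per-sample terms
keep_probs = [0.9, 0.5, 0.5, 0.8, 0.3, 0.6]        # hypothetical keep probabilities

# Averaging many pruned estimates converges to the full-batch mean,
# demonstrating unbiasedness empirically.
trials = 200_000
avg = sum(pruned_estimate(values, keep_probs, rng) for _ in range(trials)) / trials
print(full_batch_mean(values), avg)
```

Dropping the `1/p` factor in `pruned_estimate` would recover the biased behavior the abstract attributes to naive selective data utilization: low-probability samples would be systematically underweighted.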