[2510.26491] Data-Efficient RLVR via Off-Policy Influence Guidance
Computer Science > Machine Learning
arXiv:2510.26491 (cs)

[Submitted on 30 Oct 2025 (v1), last revised 15 Apr 2026 (this version, v2)]

Title: Data-Efficient RLVR via Off-Policy Influence Guidance
Authors: Erle Zhu, Dazhi Jiang, Yuan Wang, Xujun Li, Jiale Cheng, Yuxian Gu, Yilin Niu, Aohan Zeng, Jie Tang, Minlie Huang, Hongning Wang

Abstract: Data selection is a critical aspect of Reinforcement Learning with Verifiable Rewards (RLVR) for enhancing the reasoning capabilities of large language models (LLMs). Current data selection methods are largely heuristic, lacking theoretical guarantees and generalizability. This work proposes a theoretically grounded approach that uses influence functions to estimate the contribution of each data point to the learning objective. To overcome the prohibitive computational cost of the policy rollouts required for online influence estimation, we introduce an off-policy influence estimation method that efficiently approximates data influence using pre-collected offline trajectories. Furthermore, to manage the high-dimensional gradients of LLMs, we employ sparse random projection to reduce dimensionality and improve storage and computation efficiency. Leveraging these techniques, we develop \textbf{C}urriculum \textbf{R}L with \textbf{O}ff-\textbf{P}olicy \textbf{I}nfluence guidance (\textbf{CROPI}), a m...
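To make the two estimation ingredients from the abstract concrete, here is a minimal sketch, not the authors' implementation: a first-order influence proxy (the inner product between a training-example gradient and a target-objective gradient, in the spirit of TracIn-style estimators) combined with an Achlioptas-style sparse random projection to compress the high-dimensional gradients before scoring. All names (sparse_projection, influence, train_grads) and the toy dimensions are hypothetical; in the paper's setting the training gradients would come from pre-collected offline trajectories, and the projection would be applied blockwise or on the fly rather than materialized as a dense matrix.

```python
# Minimal sketch (not the paper's code): first-order influence scoring
# with a sparse random projection. Dimensions are toy-sized.
import numpy as np

rng = np.random.default_rng(0)

def sparse_projection(d: int, k: int, s: float = 3.0) -> np.ndarray:
    """Achlioptas-style sparse JL projection: entries are +/-sqrt(s/k)
    with probability 1/(2s) each and 0 with probability 1 - 1/s,
    so roughly two thirds of the entries are zero for s = 3."""
    signs = rng.choice([-1.0, 0.0, 1.0], size=(d, k),
                       p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
    return signs * np.sqrt(s / k)

d, k = 10_000, 128          # full vs. projected gradient dimension
P = sparse_projection(d, k)

def influence(train_grad: np.ndarray, target_grad: np.ndarray) -> float:
    """First-order influence proxy: gradient alignment in projected space.
    Inner products are approximately preserved under the projection, so
    rankings of training examples by influence largely survive."""
    return float((train_grad @ P) @ (target_grad @ P))

# Toy usage: rank three offline "trajectory" gradients against a target.
target = rng.standard_normal(d)
train_grads = rng.standard_normal((3, d))
scores = [influence(g, target) for g in train_grads]
print(np.argsort(scores)[::-1])  # indices sorted by estimated influence
```

The design point the abstract emphasizes is that the projected gradients are cheap to store and compare, so influence scores over a large candidate pool can be computed without fresh policy rollouts per data point.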