[2506.09016] SPEED-RL: Faster Training of Reasoning Models via Online Curriculum Learning
Computer Science > Machine Learning

arXiv:2506.09016 (cs)

This paper has been withdrawn by Ruiqi Zhang.

[Submitted on 10 Jun 2025 (v1), last revised 4 Mar 2026 (this version, v3)]

Title: SPEED-RL: Faster Training of Reasoning Models via Online Curriculum Learning

Authors: Ruiqi Zhang, Daman Arora, Song Mei, Andrea Zanette

Abstract: Training large language models with reinforcement learning (RL) against verifiable rewards significantly enhances their reasoning abilities, yet remains computationally expensive due to inefficient uniform prompt sampling. We introduce Selective Prompting with Efficient Estimation of Difficulty (SPEED), an adaptive online RL curriculum that selectively chooses training examples of intermediate difficulty to maximize learning efficiency. Theoretically, we establish that intermediate-difficulty prompts improve the gradient estimator's signal-to-noise ratio, accelerating convergence. Empirically, our efficient implementation leads to 2x to 6x faster training without degrading accuracy, requires no manual tuning, and integrates seamlessly into standard RL algorithms.

Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2506.09016 [cs.LG] (or arXiv:2506.09016v3 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2506.09016
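The selection idea described in the abstract — keep only prompts of intermediate difficulty, estimated cheaply from a few rollouts — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper names, the number of rollouts, and the pass-rate thresholds (0.2 to 0.8) are all assumptions chosen for the example.

```python
import random

def estimate_pass_rate(prompt, policy, n_rollouts=8):
    """Cheap difficulty estimate: fraction of rollouts the policy solves.
    (Hypothetical helper; the paper's estimator may differ.)"""
    return sum(policy(prompt) for _ in range(n_rollouts)) / n_rollouts

def select_intermediate(prompts, policy, low=0.2, high=0.8, n_rollouts=8):
    """Keep prompts whose estimated pass rate is intermediate.
    Thresholds are illustrative, not from the paper."""
    return [
        p for p in prompts
        if low <= estimate_pass_rate(p, policy, n_rollouts) <= high
    ]

# Toy policy: always solves "easy", never "hard", solves "medium" half the time.
def toy_policy(prompt):
    p_solve = {"easy": 1.0, "medium": 0.5, "hard": 0.0}[prompt]
    return random.random() < p_solve

if __name__ == "__main__":
    random.seed(0)
    batch = ["easy", "medium", "hard", "medium"]
    # Trivially easy (rate 1.0) and impossible (rate 0.0) prompts are dropped,
    # since they contribute little gradient signal.
    print(select_intermediate(batch, toy_policy))
```

Prompts the policy always or never solves produce near-zero advantage in standard policy-gradient updates, which is the intuition behind filtering them out before spending full training rollouts.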