[2506.09016] SPEED-RL: Faster Training of Reasoning Models via Online Curriculum Learning

arXiv - Machine Learning


Computer Science > Machine Learning — arXiv:2506.09016 (cs)

This paper has been withdrawn by Ruiqi Zhang.

[Submitted on 10 Jun 2025 (v1), last revised 4 Mar 2026 (this version, v3)]

Title: SPEED-RL: Faster Training of Reasoning Models via Online Curriculum Learning
Authors: Ruiqi Zhang, Daman Arora, Song Mei, Andrea Zanette

Abstract: Training large language models with reinforcement learning (RL) against verifiable rewards significantly enhances their reasoning abilities, yet remains computationally expensive due to inefficient uniform prompt sampling. We introduce Selective Prompting with Efficient Estimation of Difficulty (SPEED), an adaptive online RL curriculum that selectively chooses training examples of intermediate difficulty to maximize learning efficiency. Theoretically, we establish that intermediate-difficulty prompts improve the gradient estimator's signal-to-noise ratio, accelerating convergence. Empirically, our efficient implementation leads to 2x to 6x faster training without degrading accuracy, requires no manual tuning, and integrates seamlessly into standard RL algorithms.

Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2506.09016 [cs.LG] (or arXiv:2506.09016v3 [cs.LG] for this version), https://doi.org/10.48550/arXiv.2506.09016
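The core idea in the abstract — keep only prompts of intermediate difficulty, since prompts the model always or never solves yield near-zero gradient signal under verifiable (0/1) rewards — can be sketched as a simple filter over estimated pass rates. This is an illustrative sketch, not the paper's implementation; the function names, rollout count, and thresholds are assumptions.

```python
def estimate_pass_rate(prompt, policy, n_rollouts=4):
    """Estimate a prompt's difficulty by sampling a few completions and
    scoring each with the verifiable reward (1 = correct, 0 = incorrect).
    `policy` is assumed to return that 0/1 reward for one sampled rollout."""
    return sum(policy(prompt) for _ in range(n_rollouts)) / n_rollouts

def select_intermediate(prompts, policy, low=0.25, high=0.75, n_rollouts=4):
    """Keep only prompts whose estimated pass rate is intermediate.
    Pass rates near 0 (too hard) or 1 (too easy) make every sampled reward
    nearly identical, so the policy-gradient estimate carries little signal;
    filtering them out concentrates compute where the signal-to-noise
    ratio of the gradient estimator is highest."""
    selected = []
    for prompt in prompts:
        rate = estimate_pass_rate(prompt, policy, n_rollouts)
        if low <= rate <= high:
            selected.append(prompt)
    return selected
```

In an actual RL loop this filter would run online, with rollouts reused for both difficulty estimation and the gradient update; the thresholds here are placeholders rather than values from the paper.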

Originally published on March 06, 2026. Curated by AI News.


