[2510.18814] A Model Can Help Itself: Reward-Free Self-Training for LLM Reasoning
Computer Science > Machine Learning
arXiv:2510.18814 (cs)
[Submitted on 21 Oct 2025 (v1), last revised 6 Apr 2026 (this version, v2)]

Title: A Model Can Help Itself: Reward-Free Self-Training for LLM Reasoning
Authors: Mengqi Li, Lei Zhao, Anthony Man-Cho So, Ruoyu Sun, Xiao Li

Abstract: Can language models improve their reasoning performance without external rewards, using only their own sampled responses for training? We show that they can. We propose Self-evolving Post-Training (SePT), a simple post-training method that alternates between self-generation and training on self-generated responses. It repeatedly samples questions, uses the model itself to generate low-temperature responses, and then fine-tunes the model on the self-generated data. Within this self-training loop, we use an online data refresh mechanism in which each new batch is generated by the most recently updated model. Across six math reasoning benchmarks, SePT improves over a strong no-training baseline, defined as the untuned base model evaluated at its best swept decoding temperature, on several tested models. In some settings, SePT can even approach the performance of Reinforcement Learning with Verifiable Rewards (RLVR). Additional ablations demonstrate the importance of online data refresh and temperature decoupling. Overall, our results identify a pr...
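The abstract describes the loop concretely enough to sketch its skeleton. Below is a minimal Python sketch under stated assumptions, using the Hugging Face transformers API: the model name, generation temperature, learning rate, question pool, and loss masking are all illustrative placeholders, and the paper's actual filtering, batching, and scheduling details are not given in the abstract.

```python
# Minimal sketch of a SePT-style self-training loop, based only on the abstract.
# All hyperparameters and helpers here are assumptions, not the paper's settings.
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"       # stand-in base model; any causal LM would slot in here
GEN_TEMPERATURE = 0.3     # "low-temperature" generation; the exact value is an assumption
NUM_ROUNDS = 10
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).to(device)
optimizer = AdamW(model.parameters(), lr=1e-6)

# Hypothetical unlabeled question pool; SePT needs questions but no reward labels.
QUESTION_POOL = ["What is 7 * 8?", "Solve for x: 2x + 3 = 11."]

for round_idx in range(NUM_ROUNDS):
    # Self-generation: the *current* weights produce low-temperature responses.
    # Regenerating every round is the "online data refresh": each batch comes
    # from the most recently updated model, never from a stale snapshot.
    model.eval()
    texts = []
    with torch.no_grad():
        for q in QUESTION_POOL:
            ids = tokenizer(q, return_tensors="pt").input_ids.to(device)
            out = model.generate(
                ids,
                do_sample=True,
                temperature=GEN_TEMPERATURE,  # set independently of evaluation temperature
                max_new_tokens=256,
                pad_token_id=tokenizer.pad_token_id,
            )
            texts.append(tokenizer.decode(out[0], skip_special_tokens=True))

    # Training: plain supervised fine-tuning on the self-generated data,
    # with no external reward or verifier anywhere in the loop.
    model.train()
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True).to(device)
    labels = batch["input_ids"].clone()
    labels[batch["attention_mask"] == 0] = -100  # mask padding out of the LM loss
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The two abstract-level design points the sketch tries to make visible are that generation always runs on the latest weights (the online data refresh the ablations highlight) and that the data-generation temperature is a knob set separately from the evaluation temperature (temperature decoupling).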