[2506.18110] RL for Reasoning by Adaptively Revealing Rationales
Computer Science > Machine Learning

arXiv:2506.18110 (cs)

[Submitted on 22 Jun 2025 (v1), last revised 2 Mar 2026 (this version, v2)]

Title: RL for Reasoning by Adaptively Revealing Rationales

Authors: Mohammad Hossein Amani, Aryo Lotfi, Nicolas Mario Baldwin, Samy Bengio, Mehrdad Farajtabar, Emmanuel Abbe, Robert West

Abstract: Learning in the combinatorially large output space of sequence generation problems is challenging: providing expert demonstrations scales poorly with sequence length, and RL struggles with sparse rewards. Between dense demonstrations in supervised training and no demonstrations in reinforcement learning lies an underexplored regime: partial supervision. We ask whether some classes of sequence learning problems become efficiently learnable by exploiting this gap. We address this by introducing adaptive backtracking (AdaBack), a per-sample curriculum learning algorithm that reveals a partial prefix of the target output. The supervision length is adjusted dynamically for each sample based on the model's past reward signal, allowing it to incrementally learn to complete reasoning chains by conditioning on correct partial solutions. We investigate this intermediate regime between SFT and RL and argue that per-sample curriculum learning is more than a trade-off between efficiency and generality--it ...
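The per-sample curriculum described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class and function names (`AdaBackCurriculum`, `update_prefix_len`), the one-token step size, and the success/failure update rule are all assumptions made for illustration.

```python
# Hypothetical sketch of a per-sample adaptive-backtracking curriculum.
# Idea from the abstract: each training sample keeps its own supervision
# prefix length, which shrinks when the model earns reward (less of the
# target rationale is revealed) and grows when it fails (more is revealed).
# The exact update rule here is an assumption, not the paper's method.

def update_prefix_len(prefix_len, reward, step=1, min_len=0):
    """Shrink the revealed prefix on success, grow it on failure."""
    if reward > 0:
        return max(min_len, prefix_len - step)  # reveal less supervision
    return prefix_len + step                    # reveal more of the target

class AdaBackCurriculum:
    def __init__(self, target_lens):
        # Start by revealing the full target rationale for each sample.
        self.target_lens = dict(target_lens)
        self.prefix_lens = dict(target_lens)

    def prefix_for(self, sample_id):
        """How many target tokens to condition on for this sample."""
        return self.prefix_lens[sample_id]

    def record_reward(self, sample_id, reward):
        """Update this sample's supervision length from its latest reward."""
        new_len = update_prefix_len(self.prefix_lens[sample_id], reward)
        # Never reveal more than the full target.
        self.prefix_lens[sample_id] = min(new_len, self.target_lens[sample_id])

cur = AdaBackCurriculum({"q1": 10})
cur.record_reward("q1", 1.0)   # success: reveal one fewer token next time
print(cur.prefix_for("q1"))    # 9
cur.record_reward("q1", 0.0)   # failure: reveal one more token again
print(cur.prefix_for("q1"))    # 10
```

In this reading, supervised fine-tuning corresponds to a prefix covering the whole target and pure RL to an empty prefix; the per-sample update interpolates between the two based on each sample's reward history.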