[2507.13266] QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation
Computer Science > Computation and Language

arXiv:2507.13266 (cs)
[Submitted on 17 Jul 2025 (v1), last revised 31 Mar 2026 (this version, v4)]

Title: QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation
Authors: Jiazheng Li, Hongzhou Lin, Hong Lu, Kaiyue Wen, Zaiwen Yang, Jiaxuan Gao, Yi Wu, Jingzhao Zhang

Abstract: Reinforcement learning (RL) has emerged as a central paradigm for training large language models (LLMs) on reasoning tasks. Yet recent studies question RL's ability to incentivize reasoning capacity beyond the base model. This raises a key challenge: how can RL be adapted to solve harder reasoning problems more effectively? To address this challenge, we propose a simple yet effective strategy via question augmentation: introduce partial solutions during training to reduce problem difficulty and provide more informative learning signals. Our method, QuestA, when applied during RL training on math reasoning tasks, improves not only pass@1 but also pass@k, particularly on problems where standard RL struggles to make progress. This enables continual improvement over strong open-source models such as DeepScaleR and OpenMath Nemotron, further enhancing their reasoning capabilities. We achieve new state-of-the-art results on math benchmarks using 1.5B-parameter models: 72.50% (+10.73%) on A...
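The core idea of the abstract, prepending a partial solution to a problem so the RL trainer sees an easier, partially solved prompt, can be sketched roughly as follows. This is a minimal illustration under stated assumptions: the function name, the prompt template, and the step-reveal ratio are hypothetical, not taken from the paper.

```python
# Hedged sketch of question augmentation: reveal the first fraction of a
# reference solution as a hint inside the training prompt. The `ratio`
# parameter and prompt wording are illustrative assumptions.

def augment_question(problem: str, solution_steps: list[str], ratio: float = 0.5) -> str:
    """Build a training prompt that embeds a partial solution as a hint."""
    k = int(len(solution_steps) * ratio)  # number of steps to reveal
    hint = "\n".join(solution_steps[:k])
    if not hint:  # nothing to reveal: fall back to the original problem
        return problem
    return f"{problem}\n\nPartial solution:\n{hint}\n\nContinue from here."

problem = "Find the sum of the first 100 positive integers."
steps = [
    "Use the formula n(n+1)/2.",
    "Substitute n = 100.",
    "Compute 100*101/2 = 5050.",
]
print(augment_question(problem, steps, ratio=0.34))  # reveals only the first step
```

In this sketch, a lower `ratio` yields a harder augmented problem, so a curriculum could anneal the ratio toward zero as training progresses; whether and how QuestA schedules the hint fraction is specified in the paper itself, not here.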