[2602.19069] Asking the Right Questions: Improving Reasoning with Generated Stepping Stones
Summary
This article explores how generated 'stepping stones' can enhance the reasoning capabilities of large language models (LLMs) in complex tasks, proposing a framework for generating effective intermediate questions.
Why It Matters
As LLMs are increasingly applied to complex reasoning tasks, understanding how to improve their performance through intermediate question generation is crucial. This research provides insights into enhancing LLM capabilities, which can lead to more effective applications in various fields, including education, programming, and AI development.
Key Takeaways
- Stepping stones, or intermediate questions, significantly aid LLMs in solving complex tasks.
- The ARQ framework introduces a method for generating useful stepping stones to improve reasoning.
- Fine-tuning LLMs on synthetic data can enhance their ability to generate effective stepping stones.
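The paper describes ARQ as adding a question generator in front of the default reasoning pipeline: the generator proposes stepping-stone questions, the solver answers them, and those question–answer pairs become context for the final task. The sketch below illustrates that two-stage flow with toy stand-ins; the function names, signatures, and context-passing scheme are illustrative assumptions, not the paper's actual implementation.

```python
from typing import Callable, List, Tuple

# A Q/A pair produced for one stepping-stone question.
Context = List[Tuple[str, str]]


def arq_solve(task: str,
              generate_questions: Callable[[str, int], List[str]],
              solver: Callable[[str, Context], str],
              n_questions: int = 3) -> str:
    """Answer `task` after first answering generated stepping-stone questions."""
    # Stage 1: the question generator proposes intermediate questions.
    questions = generate_questions(task, n_questions)
    # Stage 2: answer each stepping stone, accumulating Q/A context.
    context: Context = []
    for q in questions:
        context.append((q, solver(q, context)))
    # Stage 3: solve the original task with the stepping stones as context.
    return solver(task, context)


# Toy stand-ins so the sketch runs without an LLM; a real pipeline would
# call a fine-tuned question generator and a reasoning model here.
def toy_generator(task: str, n: int) -> List[str]:
    return [f"Subproblem {i + 1} of: {task}" for i in range(n)]


def toy_solver(question: str, context: Context) -> str:
    return f"answer({question}; {len(context)} prior steps)"


print(arq_solve("prove the claim", toy_generator, toy_solver, n_questions=2))
# → answer(prove the claim; 2 prior steps)
```

The key design point the paper's results suggest is that the generator and solver can be different models: good stepping-stone questions transfer, so questions from one model can help solvers of varying capability.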
Paper Details
Computer Science > Artificial Intelligence (cs.AI), arXiv:2602.19069. Submitted on 22 Feb 2026.
Title: Asking the Right Questions: Improving Reasoning with Generated Stepping Stones
Authors: Hengyuan Hu, Tingchen Fu, Minqi Jiang, Alexander H Miller, Yoram Bachrach, Jakob Nicolaus Foerster
Abstract: Recent years have witnessed tremendous progress in enabling LLMs to solve complex reasoning tasks such as math and coding. As we start to apply LLMs to harder tasks that they may not be able to solve in one shot, it is worth paying attention to their ability to construct intermediate stepping stones that prepare them to better solve the tasks. Examples of stepping stones include simplifications, alternative framings, or subproblems. We study the properties and benefits of stepping stones in the context of modern reasoning LLMs via ARQ (Asking the Right Questions), our simple framework which introduces a question generator into the default reasoning pipeline. We first show that good stepping stone questions exist and are transferable, meaning that good questions can be generated, and they substantially help LLMs of various capabilities in solving the target tasks. We next frame stepping stone generation as a post-training task and show that we can fine-tune LLMs to generate more useful stepping...