[2602.15863] Not the Example, but the Process: How Self-Generated Examples Enhance LLM Reasoning
Summary
This paper explores how self-generated examples can enhance reasoning in Large Language Models (LLMs), emphasizing the process of example creation over the examples themselves.
Why It Matters
Understanding the mechanics behind self-generated examples can significantly improve the design of prompting strategies for LLMs, leading to better performance on reasoning tasks. The findings point to techniques that transfer across a range of AI applications, making them relevant to both researchers and practitioners in AI and machine learning.
Key Takeaways
- Self-generated examples improve LLM reasoning, but the creation process is key.
- Integrated prompting outperforms Zero-shot and Decoupled prompting strategies.
- Attention analysis reveals distinct patterns between prompting strategies.
- The findings provide insights for developing more effective LLM prompting techniques.
- Understanding these mechanisms can enhance AI applications across various domains.
Computer Science > Computation and Language
arXiv:2602.15863 (cs)
[Submitted on 26 Jan 2026]
Authors: Daehoon Gwak, Minseo Jung, Junwoo Park, Minho Park, ChaeHun Park, Junha Hyung, Jaegul Choo
Abstract
Recent studies have shown that Large Language Models (LLMs) can improve their reasoning performance through self-generated few-shot examples, achieving results comparable to manually curated in-context examples. However, the underlying mechanism behind these gains remains unclear, making it hard to decide when and how to apply the technique effectively. In this work, we argue that the key benefit arises not from the generated examples themselves but from the act of creating them. To validate this, on reasoning-intensive tasks across diverse LLM architectures, we systematically evaluate three prompting strategies for in-context learning: (1) Zero-shot prompting; (2) Integrated prompting, where LLMs create and solve problems within a single, unified prompt; and (3) Decoupled prompting, where self-generated examples are reused as in-context examples, but the context of their creation itself is excluded. We conduct experiments across five widely used model architectures, demonstrating that Integrated prompting con...
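The three strategies in the abstract can be made concrete as prompt templates. The sketch below is illustrative only: the wording, function names, and the number of self-generated examples are assumptions, not the authors' exact prompts.

```python
def zero_shot_prompt(question: str) -> str:
    """(1) Zero-shot: the model sees only the target question."""
    return f"Question: {question}\nAnswer:"


def integrated_prompt(question: str, n_examples: int = 2) -> str:
    """(2) Integrated: one unified prompt asks the model to first create
    and solve its own practice problems, then answer the target question,
    so the example-creation process stays in the model's context."""
    return (
        f"First, write and solve {n_examples} practice problems similar "
        "to the question below, showing your reasoning for each. "
        "Then answer the question itself.\n"
        f"Question: {question}\nAnswer:"
    )


def decoupled_prompt(question: str, generated_examples: list[str]) -> str:
    """(3) Decoupled: self-generated examples (produced in a separate,
    earlier model call) are reused as ordinary few-shot demonstrations;
    the context in which they were created is discarded."""
    demos = "\n\n".join(generated_examples)
    return f"{demos}\n\nQuestion: {question}\nAnswer:"
```

The paper's claim is that the Integrated variant, where creation and solving share one context, is what drives the gains; the Decoupled variant isolates the examples from their creation process and, per the abstract, underperforms.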