[2603.02091] Learning from Synthetic Data Improves Multi-hop Reasoning
Computer Science > Machine Learning
arXiv:2603.02091 (cs) [Submitted on 2 Mar 2026]

Title: Learning from Synthetic Data Improves Multi-hop Reasoning
Authors: Anmol Kabra, Yilun Yin, Albert Gong, Kamilė Stankevičiūtė, Dongyoung Go, Johann Lee, Katie Z. Luo, Carla P. Gomes, Kilian Q. Weinberger

Abstract: Reinforcement Learning (RL) has been shown to significantly boost the reasoning capabilities of large language models (LLMs) in math, coding, and multi-hop reasoning tasks. However, RL fine-tuning requires abundant high-quality verifiable data, often sourced from human annotations, generated by frontier LLMs, or scored by LLM-based verifiers. All three have considerable limitations: human-annotated datasets are small and expensive to curate, LLM-generated data is hallucination-prone and costly, and LLM-based verifiers are inaccurate and slow. In this work, we investigate a cheaper alternative: RL fine-tuning on rule-generated synthetic data for multi-hop reasoning tasks. We discover that LLMs fine-tuned on synthetic data perform significantly better on popular real-world question-answering benchmarks, despite the synthetic data containing only fictional knowledge. On stratifying performance by question difficulty, we find that synthetic data teaches LLMs to compose knowledge -- a fundamental and generalizable reasoning skill. Ou...
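The abstract describes rule-generated synthetic data with fictional knowledge and verifiable answers. The sketch below is a hypothetical illustration of that idea, not the paper's actual pipeline: fictional facts are composed into a two-hop question whose gold answer is produced by the same rule that generated the facts, so correctness can be checked exactly without human annotators or an LLM-based verifier. All entity names and the template are invented for the example.

```python
import random

# Fictional entities: none of these exist in real-world knowledge,
# so a model cannot answer from memorized facts.
PEOPLE = ["Zorvik", "Quellan", "Mirath"]
CITIES = ["Vrendale", "Oskarn", "Tylenna"]
RIVERS = ["Ferrow", "Lunveil", "Cathrin"]

def make_example(rng: random.Random) -> dict:
    """Compose two fictional facts into a two-hop question.

    Hop 1: person -> birthplace; hop 2: birthplace -> river.
    The gold answer is known by construction, making the
    example verifiable by a simple rule.
    """
    person = rng.choice(PEOPLE)
    city = rng.choice(CITIES)
    river = rng.choice(RIVERS)
    facts = [
        f"{person} was born in {city}.",
        f"{city} lies on the river {river}.",
    ]
    question = f"Which river flows through the birthplace of {person}?"
    return {"context": " ".join(facts), "question": question, "answer": river}

def verify(example: dict, model_answer: str) -> bool:
    # Rule-based verifier: exact match against the generated gold answer,
    # replacing a slow and potentially inaccurate LLM judge.
    return model_answer.strip() == example["answer"]

rng = random.Random(0)
example = make_example(rng)
```

Because the gold answer is fixed at generation time, `verify` can serve directly as a binary reward signal for RL fine-tuning, which is the cost advantage the abstract highlights over LLM-generated data and LLM-based verifiers.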