[2604.05114] $\pi^2$: Structure-Originated Reasoning Data Improves Long-Context Reasoning Ability of Large Language Models
Computer Science > Computation and Language
arXiv:2604.05114 (cs)
[Submitted on 6 Apr 2026]

Title: $\pi^2$: Structure-Originated Reasoning Data Improves Long-Context Reasoning Ability of Large Language Models
Authors: Quyet V. Do, Thinh Pham, Nguyen Nguyen, Sha Li, Pratibha Zunjare, Tu Vu

Abstract: We study a pipeline that curates reasoning data from structured seed data to improve long-context reasoning in large language models (LLMs). Our approach, $\pi^2$, constructs high-quality reasoning data through rigorous QA curation: 1) extracting and expanding tables from Wikipedia; 2) generating, from the collected tables and relevant context, realistic multi-hop analytical reasoning questions whose answers are automatically determined and verified through dual-path code execution; and 3) back-translating step-by-step structured reasoning traces as solutions to the QA pairs, grounded in realistic web-search context. Supervised fine-tuning of gpt-oss-20b and Qwen3-4B-Instruct-2507 on $\pi^2$ yields consistent improvements across four long-context reasoning benchmarks and our companion $\pi^2$-Bench, with average absolute accuracy gains of +4.3% and +2.7%, respectively. Notably, our dataset facilitates self-distillation, where gpt-oss-20b ...
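The dual-path verification in step 2 can be illustrated with a minimal sketch: the same question over a table is answered by two independently written programs, and the QA pair is kept only when both executions agree. The toy table, question, and solver functions below are hypothetical illustrations of the idea, not the paper's released code.

# Minimal sketch of dual-path answer verification, assuming a tabular QA
# setting like the one the abstract describes. Names here are illustrative.

import pandas as pd

# Toy Wikipedia-style table (hypothetical data).
table = pd.DataFrame(
    {
        "country": ["A", "B", "C", "D"],
        "population_m": [5.0, 12.5, 3.2, 8.7],
        "gdp_bn": [250, 480, 90, 310],
    }
)

question = "Which country has the highest GDP per capita?"

def solve_with_pandas(df: pd.DataFrame) -> str:
    # Path 1: vectorized pandas computation.
    per_capita = df["gdp_bn"] / df["population_m"]
    return df.loc[per_capita.idxmax(), "country"]

def solve_with_loop(df: pd.DataFrame) -> str:
    # Path 2: an independent plain-Python re-derivation of the same quantity.
    best_country, best_value = None, float("-inf")
    for row in df.itertuples():
        value = row.gdp_bn / row.population_m
        if value > best_value:
            best_country, best_value = row.country, value
    return best_country

answer_1 = solve_with_pandas(table)
answer_2 = solve_with_loop(table)

# Keep the QA pair only when the two execution paths agree on the answer.
if answer_1 == answer_2:
    qa_pair = {"question": question, "answer": answer_1}
    print("verified:", qa_pair)
else:
    print("discarded: execution paths disagree:", answer_1, answer_2)

Agreement between two independent implementations is a cheap correctness signal: a disagreement flags either an ambiguous question or a buggy solution program, and in either case the candidate QA pair can simply be discarded.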