[2604.05114] $π^2$: Structure-Originated Reasoning Data Improves Long-Context Reasoning Ability of Large Language Models

arXiv - AI 3 min read

Computer Science > Computation and Language — arXiv:2604.05114 (cs). Submitted on 6 Apr 2026.

Title: $\pi^2$: Structure-Originated Reasoning Data Improves Long-Context Reasoning Ability of Large Language Models

Authors: Quyet V. Do, Thinh Pham, Nguyen Nguyen, Sha Li, Pratibha Zunjare, Tu Vu

Abstract: We study a pipeline that curates reasoning data from initial structured data to improve long-context reasoning in large language models (LLMs). Our approach, $\pi^2$, constructs high-quality reasoning data through rigorous QA curation: 1) extracting and expanding tables from Wikipedia; 2) generating, from the collected tables and relevant context, realistic multi-hop analytical reasoning questions whose answers are automatically determined and verified through dual-path code execution; and 3) back-translating step-by-step structured reasoning traces as solutions to the QA pairs, given realistic web-search context. Supervised fine-tuning of gpt-oss-20b and Qwen3-4B-Instruct-2507 on $\pi^2$ yields consistent improvements across four long-context reasoning benchmarks and our own $\pi^2$-Bench, with average absolute accuracy gains of +4.3% and +2.7%, respectively. Notably, our dataset facilitates self-distillation, where gpt-oss-20b...
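The "dual-path code execution" step in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; all function and field names below are hypothetical. The idea is that a candidate answer to a table question is computed by two independently written programs, and the QA pair is kept only when both executions agree:

```python
# Hypothetical sketch of dual-path answer verification (illustrative only,
# not code from the paper): two independent programs answer the same table
# question; the QA pair is retained only if both answers match.

def path_a_max_population(rows):
    # Path A: manual scan over the table rows.
    best = None
    for row in rows:
        if best is None or row["population"] > best["population"]:
            best = row
    return best["city"]

def path_b_max_population(rows):
    # Path B: an independent implementation based on sorting.
    return sorted(rows, key=lambda r: r["population"], reverse=True)[0]["city"]

def verify(rows):
    # Keep the QA pair only if the two execution paths agree.
    a = path_a_max_population(rows)
    b = path_b_max_population(rows)
    return a, a == b

# Toy table standing in for an extracted Wikipedia table.
table = [
    {"city": "Hanoi", "population": 8_500_000},
    {"city": "Da Nang", "population": 1_200_000},
    {"city": "Ho Chi Minh City", "population": 9_300_000},
]
answer, agreed = verify(table)
print(answer, agreed)  # Ho Chi Minh City True
```

In this framing, disagreement between the two paths signals either an ambiguous question or a buggy generated program, so the pair is discarded rather than entering the training data.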

Originally published on April 08, 2026. Curated by AI News.
