[2603.01293] Theoretical Perspectives on Data Quality and Synergistic Effects in Pre- and Post-Training Reasoning Models
Computer Science > Machine Learning

arXiv:2603.01293 (cs)

[Submitted on 1 Mar 2026]

Authors: Adel Javanmard, Baharan Mirzasoleiman, Vahab Mirrokni

Abstract: Large Language Models (LLMs) are pretrained on massive datasets and later instruction-tuned via supervised fine-tuning (SFT) or reinforcement learning (RL). Best practices emphasize large, diverse pretraining data, whereas post-training operates differently: SFT relies on smaller, high-quality datasets, while RL benefits more from scale, with larger amounts of feedback often outweighing label quality. Yet it remains unclear why pretraining and RL require large datasets, why SFT excels on smaller ones, and what defines high-quality SFT data. In this work, we theoretically analyze transformers trained on an in-context weight prediction task for linear regression. Our analysis reveals several key findings: $(i)$ balanced pretraining data can induce latent capabilities later activated during post-training, and $(ii)$ SFT learns best from a small set of examples challenging for the pretrained model, while excessively large SFT datasets may dilute informative pretraining signals. In contrast, RL is most effective on large-scale dat...
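The abstract does not spell out the data-generating process for the in-context weight prediction task, but a standard formulation of in-context linear regression can be sketched as follows: each task draws a latent weight vector $w$, the model observes a sequence of $(x_i, y_i)$ pairs with $y_i = \langle w, x_i \rangle$, and must predict $w$. The function and parameter names below are illustrative, not taken from the paper.

```python
import numpy as np

def make_icl_regression_batch(n_tasks=4, n_examples=8, dim=5, noise=0.0, seed=0):
    """Sample in-context linear-regression tasks (illustrative setup):
    for each task, draw a latent weight vector w and n (x, y) pairs with
    y_i = <w, x_i> (+ optional noise). A transformer trained on this task
    would read the (x, y) context and predict w."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_tasks, dim))              # latent weights, one per task
    X = rng.normal(size=(n_tasks, n_examples, dim))  # in-context inputs
    Y = np.einsum('td,tnd->tn', W, X)                # targets y_i = <w, x_i>
    Y += noise * rng.normal(size=Y.shape)
    return X, Y, W

X, Y, W = make_icl_regression_batch()
# In the noise-free case with n_examples >= dim, ordinary least squares
# recovers the latent weights exactly, which is the quantity the
# transformer is trained to predict in context.
w_hat = np.linalg.lstsq(X[0], Y[0], rcond=None)[0]
print(np.allclose(w_hat, W[0]))  # → True
```

Varying `noise` and `n_examples` per task is one way such a setup could model data-quality and data-scale effects, since noisier or shorter contexts make the latent $w$ harder to infer.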