[2510.03605] Understanding the Role of Training Data in Test-Time Scaling
Computer Science > Artificial Intelligence
arXiv:2510.03605 (cs)
[Submitted on 4 Oct 2025 (v1), last revised 1 Mar 2026 (this version, v2)]

Title: Understanding the Role of Training Data in Test-Time Scaling
Authors: Adel Javanmard, Baharan Mirzasoleiman, Vahab Mirrokni

Abstract: Test-time scaling improves the reasoning capabilities of large language models (LLMs) by allocating extra compute to generate longer Chains-of-Thought (CoTs). This enables models to tackle more complex problems by breaking them down into additional steps, backtracking, and correcting mistakes. Despite its strong performance, demonstrated by OpenAI's o1 and DeepSeek R1, the conditions in the training data under which long CoTs emerge, and when such long CoTs improve performance, remain unclear. In this paper, we study the performance of test-time scaling for transformers trained on an in-context weight prediction task for linear regression. Our analysis provides a theoretical explanation for several intriguing observations: First, at any fixed test error, increasing test-time compute allows us to reduce the number of in-context examples (context length) in training prompts. Second, if the skills required to solve a downstream task are not sufficiently present in the training data, increasing test-time compute can harm performance. Finally, ...
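The in-context weight-prediction task described in the abstract can be sketched concretely. The following is a minimal illustration, not the paper's code: each prompt holds n in-context pairs (x_i, y_i = wᵀx_i + noise) drawn from a fresh task vector w, and the target is w itself. A least-squares solve stands in for the trained transformer's prediction; the dimension, noise scale, and context lengths below are illustrative choices, not values from the paper.

```python
# Hypothetical sketch of the in-context weight-prediction setup for
# linear regression; the least-squares solver is a stand-in for the
# transformer, not the paper's actual model or training procedure.
import numpy as np

rng = np.random.default_rng(0)
d = 8          # feature dimension (illustrative choice)
noise = 0.1    # label-noise scale (illustrative choice)

def make_prompt(n):
    """Sample one task vector w and n in-context examples for it."""
    w = rng.normal(size=d)
    X = rng.normal(size=(n, d))
    y = X @ w + noise * rng.normal(size=n)
    return X, y, w

def predict_w(X, y):
    """Least-squares stand-in for the model's in-context weight prediction."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Longer contexts (more in-context examples) yield better weight
# estimates, the quantity traded off against test-time compute
# in the abstract's first observation.
results = {}
for n in (8, 32, 128):
    errs = []
    for _ in range(200):
        X, y, w = make_prompt(n)
        errs.append(np.linalg.norm(predict_w(X, y) - w))
    results[n] = float(np.mean(errs))
    print(f"n={n:4d}: mean ||w_hat - w|| = {results[n]:.3f}")
```

Running the loop shows the mean weight-recovery error shrinking as the context length n grows, which is the context-length axis that the paper's first observation trades against test-time compute.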