[2510.03605] Understanding the Role of Training Data in Test-Time Scaling




Computer Science > Artificial Intelligence — arXiv:2510.03605 (cs)

[Submitted on 4 Oct 2025 (v1), last revised 1 Mar 2026 (this version, v2)]

Title: Understanding the Role of Training Data in Test-Time Scaling

Authors: Adel Javanmard, Baharan Mirzasoleiman, Vahab Mirrokni

Abstract: Test-time scaling improves the reasoning capabilities of large language models (LLMs) by allocating extra compute to generate longer Chains-of-Thought (CoTs). This enables models to tackle more complex problems by breaking them down into additional steps, backtracking, and correcting mistakes. Despite its strong performance, demonstrated by OpenAI's o1 and DeepSeek R1, the conditions in the training data under which long CoTs emerge, and when such long CoTs improve performance, remain unclear. In this paper, we study the performance of test-time scaling for transformers trained on an in-context weight prediction task for linear regression. Our analysis provides a theoretical explanation for several intriguing observations: First, at any fixed test error, increasing test-time compute allows us to reduce the number of in-context examples (context length) in training prompts. Second, if the skills required to solve a downstream task are not sufficiently present in the training data, increasing test-time compute can harm performance. Finally, ...
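The abstract only names the training task ("in-context weight prediction for linear regression") without specifying the paper's exact setup. As a rough illustration of what such a task looks like, the hypothetical sketch below samples one prompt: a batch of (x, y) pairs generated from a random weight vector w, where the model's target is w itself. The dimensions, noise level, and least-squares baseline are all illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_prompt(d=5, n_context=10, noise=0.1):
    """Sample one in-context weight-prediction prompt (illustrative setup).

    Returns (X, y, w): n_context examples drawn from y = X @ w + noise,
    where the in-context learner's target is the weight vector w itself.
    """
    w = rng.standard_normal(d)              # ground-truth weights (the label)
    X = rng.standard_normal((n_context, d)) # in-context inputs
    y = X @ w + noise * rng.standard_normal(n_context)
    return X, y, w

# Least-squares baseline: what an ideal in-context learner would recover
# from the prompt alone (error shrinks as context length n_context grows).
X, y, w = make_prompt()
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.linalg.norm(w_hat - w))  # small once n_context >= d
```

In this framing, the paper's first observation corresponds to a trade-off between the context length n_context seen during training and the compute spent at test time to reach the same weight-recovery error.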

Originally published on March 03, 2026. Curated by AI News.


