[2509.23383] Train Once, Answer All: Many Pretraining Experiments for the Cost of One

arXiv - Machine Learning


Computer Science > Computation and Language
arXiv:2509.23383 (cs)
[Submitted on 27 Sep 2025 (v1), last revised 1 Mar 2026 (this version, v2)]

Title: Train Once, Answer All: Many Pretraining Experiments for the Cost of One
Authors: Sebastian Bordt, Martin Pawelczyk

Abstract: Recent work has demonstrated that controlled pretraining experiments are a powerful tool for studying the relationship between training data and large language model (LLM) behavior. However, the computational cost of pretraining presents a significant constraint. To overcome this constraint, we propose a new approach where multiple experiments are conducted simultaneously during a single training run. We validate our approach by performing ten experiments while training on 210B tokens, with models of up to 2.7B parameters. Although models are trained only once, we can replicate the results of multiple previous works on data contamination, poisoning, and memorization. We also conduct novel investigations into knowledge acquisition, mathematical reasoning, and watermarking. For example, we dynamically update the training data until a model acquires a particular piece of knowledge. Remarkably, the influence of the experiments on the model's training dynamics and overall performance is minimal. However, interactions between experiments may ...
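To make the core idea concrete, here is a minimal, purely illustrative sketch of what "many experiments in one training run" could look like: experiment-specific documents are injected at a low rate into a single shared pretraining stream, so each experiment's signal is present without a separate run. All names, rates, and documents below are assumptions for illustration, not the paper's actual method or data.

```python
import random

def build_mixed_stream(corpus_docs, experiment_docs, inject_rate=0.001, seed=0):
    """Interleave experiment-specific documents into the base corpus at a low
    rate, so several experiments share one pretraining data stream.

    Hypothetical sketch: `experiment_docs` maps an experiment name to its
    documents; each item in the returned stream is (source_tag, document).
    """
    rng = random.Random(seed)  # fixed seed for a reproducible mixture
    stream = []
    for doc in corpus_docs:
        stream.append(("corpus", doc))
        # After each corpus document, each experiment gets a small chance
        # to inject one of its documents into the stream.
        for name, docs in experiment_docs.items():
            if rng.random() < inject_rate:
                stream.append((name, rng.choice(docs)))
    return stream

# Toy usage: three illustrative "experiments" share one stream.
corpus = [f"web document {i}" for i in range(10_000)]
experiments = {
    "contamination": ["benchmark question followed by its answer"],
    "memorization": ["canary string 1234-5678"],
    "knowledge": ["Fictional fact: the Zorblat river is 42 km long."],
}
stream = build_mixed_stream(corpus, experiments, inject_rate=0.01)
injected = sum(1 for tag, _ in stream if tag != "corpus")
```

The injection rate is the key knob: kept small, the experiment data is a negligible fraction of total tokens, which is consistent with the abstract's observation that the experiments barely affect overall training dynamics.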

Originally published on March 03, 2026. Curated by AI News.


