[2509.23383] Train Once, Answer All: Many Pretraining Experiments for the Cost of One
Computer Science > Computation and Language
arXiv:2509.23383 (cs)
[Submitted on 27 Sep 2025 (v1), last revised 1 Mar 2026 (this version, v2)]
Title: Train Once, Answer All: Many Pretraining Experiments for the Cost of One
Authors: Sebastian Bordt, Martin Pawelczyk

Abstract: Recent work has demonstrated that controlled pretraining experiments are a powerful tool for studying the relationship between training data and large language model (LLM) behavior. However, the computational cost of pretraining presents a significant constraint. To overcome this constraint, we propose a new approach where multiple experiments are conducted simultaneously during a single training run. We validate our approach by performing ten experiments while training on 210B tokens, with models of up to 2.7B parameters. Although models are trained only once, we can replicate the results of multiple previous works on data contamination, poisoning, and memorization. We also conduct novel investigations into knowledge acquisition, mathematical reasoning, and watermarking. For example, we dynamically update the training data until a model acquires a particular piece of knowledge. Remarkably, the influence of the experiments on the model's training dynamics and overall performance is minimal. However, interactions between experiments may ...
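The abstract does not specify how the experiments are combined, but the "train once" idea can be illustrated with a minimal sketch: each experiment contributes its own probe documents (contaminated benchmark items, poisoning triggers, memorization canaries, etc.) to a single shared pretraining stream, so one run serves all experiments. All names and document counts below are hypothetical, not from the paper.

```python
import random

def build_stream(base_docs, experiments, seed=0):
    """Mix each experiment's probe documents into one shared corpus.

    base_docs:   list of ordinary pretraining documents.
    experiments: dict mapping experiment name -> list of injected docs.
    Returns a shuffled list of (tag, doc) pairs, where tag records which
    experiment (if any) a document belongs to.
    """
    rng = random.Random(seed)
    tagged = [("base", d) for d in base_docs]
    for name, docs in experiments.items():
        tagged.extend((name, d) for d in docs)
    rng.shuffle(tagged)  # interleave injected docs with ordinary data
    return tagged

# Hypothetical experiments sharing one training run.
base = [f"web_doc_{i}" for i in range(100)]
experiments = {
    "contamination": ["benchmark_item_1", "benchmark_item_2"],
    "poisoning": ["trigger_doc"],
    "memorization": ["canary_string_7"],
}
stream = build_stream(base, experiments)
injected = sum(1 for tag, _ in stream if tag != "base")
print(len(stream), injected)  # 104 total docs, 4 injected
```

After training on the combined stream once, each experiment is evaluated independently (e.g., checking benchmark scores, trigger behavior, or canary extraction), rather than retraining a model per experiment.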