[2601.21343] Self-Improving Pretraining: using post-trained models to pretrain better models



Computer Science > Computation and Language
arXiv:2601.21343 (cs)
[Submitted on 29 Jan 2026 (v1), last revised 5 Apr 2026 (this version, v3)]

Title: Self-Improving Pretraining: using post-trained models to pretrain better models

Authors: Ellen Xiaoqing Tan, Jack Lanchantin, Shehzaad Dhuliawala, Danwei Li, Thao Nguyen, Jing Xu, Ping Yu, Ilia Kulikov, Sainbayar Sukhbaatar, Jason Weston, Xian Li, Olga Golovneva

Abstract: Large language models are classically trained in stages: pretraining on raw text followed by post-training for instruction following and reasoning. However, this separation creates a fundamental limitation: many desirable behaviors such as safety, factuality, overall generation quality, and reasoning ability are only added at a late stage, even though the patterns learned earlier strongly shape a model's capabilities. To tackle this issue, we introduce a new way to pretrain and mid-train models that incorporates these behaviors earlier. We utilize an existing strong, post-trained model both to rewrite pretraining data and to judge policy model rollouts, thus using reinforcement earlier in training. In our experiments, we show this can give strong gains in quality, safety, factuality, and reasoning.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine ...
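The abstract describes two roles for the post-trained model during pretraining: rewriting raw documents and judging policy rollouts to supply a reinforcement signal. The sketch below illustrates that division of labor only; every name (`TeacherModel`, `rewrite`, `judge`, the placeholder scoring logic) is hypothetical and not the paper's actual method or API.

```python
# Hypothetical sketch of the two teacher roles the abstract describes:
# (1) rewrite raw pretraining documents, (2) judge policy rollouts for RL.
# All class/function names are illustrative, not taken from the paper.
from dataclasses import dataclass


@dataclass
class TeacherModel:
    """Stand-in for a strong post-trained model."""

    def rewrite(self, doc: str) -> str:
        # Placeholder: a real teacher would rewrite for quality,
        # safety, and factuality, not just normalize whitespace/case.
        return doc.strip().capitalize()

    def judge(self, rollout: str) -> float:
        # Placeholder scalar reward in [0, 1]; a real judge would
        # score the rollout's content, not its length.
        return min(len(rollout.split()) / 10.0, 1.0)


def prepare_corpus(teacher: TeacherModel, raw_docs: list[str]) -> list[str]:
    """Role 1: rewrite raw data before it reaches the pretraining policy."""
    return [teacher.rewrite(d) for d in raw_docs]


def reward_rollouts(teacher: TeacherModel, rollouts: list[str]) -> list[float]:
    """Role 2: judge policy rollouts, yielding rewards for early RL."""
    return [teacher.judge(r) for r in rollouts]


teacher = TeacherModel()
corpus = prepare_corpus(teacher, ["  raw web text about transformers  "])
rewards = reward_rollouts(teacher, ["the model generates a short answer here"])
print(corpus[0])   # rewritten document
print(rewards[0])  # scalar reward for the rollout
```

In a real training loop, the rewritten corpus would feed the standard next-token objective while the judge's rewards would drive a policy-gradient update; the sketch only shows where the teacher plugs in.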

Originally published on April 07, 2026. Curated by AI News.

