[2603.23562] Synthetic Mixed Training: Scaling Parametric Knowledge Acquisition Beyond RAG

arXiv - Machine Learning

About this article


Computer Science > Machine Learning
arXiv:2603.23562 (cs)
[Submitted on 24 Mar 2026]

Title: Synthetic Mixed Training: Scaling Parametric Knowledge Acquisition Beyond RAG

Authors: Seungju Han, Konwoo Kim, Chanwoo Park, Benjamin Newman, Suhas Kotha, Jaehun Jung, James Zou, Yejin Choi

Abstract: Synthetic data augmentation helps language models learn new knowledge in data-constrained domains. However, naively scaling existing synthetic data methods, whether by training on more synthetic tokens or by using stronger generators, yields diminishing returns that fall short of the performance of retrieval-augmented generation (RAG). To break the RAG ceiling, we introduce Synthetic Mixed Training, which combines synthetic QAs and synthetic documents. This leverages their complementary training signals and enables log-linear improvements as both synthetic data volume and generator strength increase, allowing the model to outperform RAG by a 2.6% relative gain on QuALITY, a long-document reading comprehension benchmark. In addition, we introduce Focal Rewriting, a simple technique for synthetic document generation that explicitly conditions document generation on specific questions, improving the diversity of synthetic documents and yielding a steeper log-linear scaling curve. On...
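The abstract sketches two ideas that are easy to picture in code: mixing the two synthetic data types into one training corpus, and conditioning document generation on a specific question. The sketch below is a minimal, hypothetical reading of Synthetic Mixed Training as described above; the function names, the mixing knob `qa_ratio`, and the QA formatting are assumptions, since the abstract does not specify the actual pipeline.

```python
import random

def build_mixed_corpus(synthetic_qas, synthetic_docs, qa_ratio=0.5, seed=0):
    """Interleave synthetic QA pairs and synthetic documents into one corpus.

    synthetic_qas:  list of (question, answer) tuples from a generator model.
    synthetic_docs: list of synthetic document strings.
    qa_ratio:       target fraction of QA examples in the mix (assumed knob,
                    must be < 1.0; the abstract states no proportion).
    """
    # Take enough QA examples to hit the target ratio against the documents.
    n_qa = int(len(synthetic_docs) * qa_ratio / (1.0 - qa_ratio))
    qa_texts = [f"Question: {q}\nAnswer: {a}" for q, a in synthetic_qas[:n_qa]]
    corpus = qa_texts + list(synthetic_docs)
    random.Random(seed).shuffle(corpus)  # mix the two signals across batches
    return corpus
```

Similarly, Focal Rewriting is described only as conditioning document generation on specific questions. A plausible realization is a prompt template; the wording below is illustrative, not the paper's, and `generator` stands for any prompt-to-text LLM wrapper.

```python
# Hypothetical Focal Rewriting prompt: different questions steer the rewrite
# toward different facets of the source document, which is the mechanism the
# abstract credits for more diverse synthetic documents.
FOCAL_REWRITE_PROMPT = (
    "Rewrite the following document so that it clearly presents the "
    "information needed to answer this question.\n\n"
    "Question: {question}\n\nDocument:\n{document}\n\nRewritten document:"
)

def focal_rewrite(generator, document, question):
    """Generate one question-focused synthetic document."""
    prompt = FOCAL_REWRITE_PROMPT.format(question=question, document=document)
    return generator(prompt)
```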

Originally published on March 26, 2026. Curated by AI News.

