[2603.08022] Capacity-Aware Mixture Law Enables Efficient LLM Data Optimization


arXiv - Machine Learning 4 min read


Computer Science > Machine Learning
arXiv:2603.08022 (cs)
[Submitted on 9 Mar 2026 (v1), last revised 6 May 2026 (this version, v2)]

Title: Capacity-Aware Mixture Law Enables Efficient LLM Data Optimization
Authors: Jingwei Li, Xinran Gu, Jingzhao Zhang

Abstract: A data mixture specifies how different data sources are combined to train large language models, and selecting an effective mixture is crucial for optimal downstream performance. Existing methods either conduct costly searches directly on the target model or rely on mixture scaling laws that fail to extrapolate well to large model sizes. We address these limitations by introducing a compute-efficient pipeline for data mixture scaling. First, we propose CAMEL, a capacity-aware mixture law that models validation loss via the nonlinear interplay between model size and mixture. We also introduce a loss-to-benchmark prediction law that estimates benchmark accuracy from validation loss, enabling end-to-end performance prediction for the target model. Next, we study how to allocate a fixed compute budget across model scales to fit the law and reduce prediction error. Finally, we apply our method to Mixture-of-Experts models with up to 7B-A150M parameters to fit the law, and verify the optimal mixture derived from the law by extrapolating to a 55B-A1.2B target model...
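The abstract does not give CAMEL's actual functional form or the paper's loss-to-benchmark law; the sketch below only illustrates the general workflow it describes. It assumes a simple parametric loss model `L(N, w) = E + sum_i A_i / (w_i^a_i * N^b_i)` coupling model size `N` and mixture weights `w`, with entirely made-up parameter values, then grid-searches the two-domain mixture simplex for the loss-minimizing mixture at a target size, and maps the predicted loss to a benchmark score through a hypothetical sigmoid. All names and numbers here are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Hypothetical capacity-aware mixture law (NOT the paper's CAMEL form):
#   L(N, w) = E + sum_i A_i / (w_i**a_i * N**b_i)
# All parameters below are made up for demonstration only.
E = 1.7                     # irreducible loss
A = np.array([0.4, 0.6])    # per-domain loss scale
a = np.array([0.3, 0.5])    # mixture-weight exponents
b = np.array([0.25, 0.25])  # model-capacity exponents

def predicted_loss(N, w):
    """Predicted validation loss for model size N and mixture weights w."""
    w = np.asarray(w, dtype=float)
    return E + float(np.sum(A / (w**a * N**b)))

def best_mixture(N, steps=999):
    """Grid-search the 2-domain simplex for the loss-minimizing mixture at size N."""
    ws = np.linspace(0.001, 0.999, steps)
    losses = [predicted_loss(N, [w, 1.0 - w]) for w in ws]
    i = int(np.argmin(losses))
    return (ws[i], 1.0 - ws[i]), losses[i]

def predicted_accuracy(loss, floor=0.25, ceil=0.95, mid=2.2, k=4.0):
    """Hypothetical loss-to-benchmark law: sigmoid in validation loss.

    floor = chance-level accuracy, ceil = saturation accuracy; both invented.
    """
    return floor + (ceil - floor) / (1.0 + np.exp(k * (loss - mid)))
```

Usage mirrors the pipeline's last step: fit the law on small models (here the parameters are simply assumed), then evaluate `best_mixture` at the target scale, e.g. `w_star, L_star = best_mixture(1e9)`, and pass `L_star` to `predicted_accuracy` for an end-to-end benchmark estimate.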

Originally published on May 07, 2026. Curated by AI News.

