[2510.04067] What Scales in Cross-Entropy Scaling Law?

arXiv - Machine Learning

Computer Science > Machine Learning
arXiv:2510.04067 (cs)
[Submitted on 5 Oct 2025 (v1), last revised 28 Feb 2026 (this version, v2)]

Title: What Scales in Cross-Entropy Scaling Law?
Authors: Junxi Yan, Zixi Wei, Qingyao Ai, Yiqun Liu, Jingtao Zhan

Abstract: The cross-entropy scaling law has long served as a key tool for guiding the development of large language models. It shows that cross-entropy loss decreases at a predictable power-law rate as model size increases. However, recent evidence indicates that this law breaks down at very large scales: the loss decreases more slowly than expected, which poses significant challenges for the development of large language models. In this paper, we hypothesize that the root cause is that cross-entropy itself does not truly scale; instead, only one of its hidden components does. To investigate this, we introduce a novel decomposition of cross-entropy into three parts: Error-Entropy, Self-Alignment, and Confidence. We show both theoretically and empirically that this decomposition precisely captures the training dynamics and optimization objectives. Through extensive experiments on multiple datasets and 32 models spanning five orders of magnitude in size, we find that only error-entropy follows a robust power-law scaling, while the other two terms remain largely invariant. Moreover, error-entr...
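To make the central claim concrete, the sketch below shows what "decreases at a predictable power-law rate" means in practice: fitting the common form L(N) = a·N^(-b) + c to loss measurements across model sizes and recovering the exponent in log-log space. The coefficients and data here are synthetic and purely illustrative; they are not taken from the paper, and the paper's own decomposition of cross-entropy is not reproduced here.

```python
import numpy as np

# Hypothetical scaling-law form L(N) = a * N^(-b) + c, where N is the
# parameter count. These coefficients are made up for illustration.
a, b, c = 10.0, 0.1, 1.7

# Synthetic "measured" losses over five orders of magnitude in model
# size, mirroring the range covered by the paper's experiments.
sizes = np.logspace(6, 11, 16)        # 1e6 .. 1e11 parameters
losses = a * sizes ** (-b) + c

# After subtracting the irreducible term c, a power law is linear in
# log-log space, so the exponent falls out of a linear fit.
slope, intercept = np.polyfit(np.log(sizes), np.log(losses - c), 1)
print(round(-slope, 3))  # recovered exponent b
```

The paper's point is that a fit like this holds for the error-entropy component but not for cross-entropy as a whole at very large scales, since the other two components stay roughly constant.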

Originally published on March 03, 2026. Curated by AI News.
