[2510.04067] What Scales in Cross-Entropy Scaling Law?
Computer Science > Machine Learning

arXiv:2510.04067 (cs)

[Submitted on 5 Oct 2025 (v1), last revised 28 Feb 2026 (this version, v2)]

Title: What Scales in Cross-Entropy Scaling Law?

Authors: Junxi Yan, Zixi Wei, Qingyao Ai, Yiqun Liu, Jingtao Zhan

Abstract: The cross-entropy scaling law has long served as a key tool for guiding the development of large language models. It states that cross-entropy loss decreases at a predictable power-law rate as model size increases. However, recent evidence indicates that this law breaks down at very large scales: the loss decreases more slowly than expected, which causes significant trouble for developing large language models. In this paper, we hypothesize that the root cause is that cross-entropy itself does not truly scale; instead, only one of its hidden components does. To investigate this, we introduce a novel decomposition of cross-entropy into three parts: Error-Entropy, Self-Alignment, and Confidence. We show both theoretically and empirically that this decomposition precisely captures the training dynamics and optimization objectives. Through extensive experiments on multiple datasets and 32 models spanning five orders of magnitude in size, we find that only error-entropy follows a robust power-law scaling, while the other two terms remain largely invariant. Moreover, error-entr...
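The power-law form the abstract refers to can be illustrated with a short sketch. This is a hypothetical example, not taken from the paper: the loss curve L(N) = a * N^(-b) and the constants a, b below are made up for demonstration. Since a power law is linear in log-log space, a degree-1 polynomial fit recovers the exponent.

```python
import numpy as np

# Hypothetical constants for an idealized scaling curve L(N) = a * N^(-b);
# not values reported in the paper.
a_true, b_true = 10.0, 0.3
model_sizes = np.logspace(6, 11, 12)        # 1e6 .. 1e11 parameters
losses = a_true * model_sizes ** (-b_true)  # idealized cross-entropy loss

# A power law is a straight line in log-log coordinates, so a linear
# fit of log(loss) against log(size) recovers the exponent b (as the
# negated slope) and the prefactor a (as exp of the intercept).
slope, intercept = np.polyfit(np.log(model_sizes), np.log(losses), 1)
b_fit, a_fit = -slope, np.exp(intercept)
print(b_fit, a_fit)
```

The paper's observation is that the measured loss of very large models sits above such a fitted line, i.e. the single power-law form stops describing the full cross-entropy, even though (per the abstract) the error-entropy component alone continues to follow it.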