[2506.15872] Hidden Breakthroughs in Language Model Training
Computer Science > Machine Learning
arXiv:2506.15872 (cs)
[Submitted on 18 Jun 2025 (v1), last revised 27 Feb 2026 (this version, v3)]

Title: Hidden Breakthroughs in Language Model Training
Authors: Sara Kangaslahti, Elan Rosenfeld, Naomi Saphra

Abstract: Loss curves are smooth during most of model training, so visible discontinuities stand out as possible conceptual breakthroughs. Studying these breakthroughs enables a deeper understanding of learning dynamics, but only when they are properly identified. This paper argues that similar breakthroughs occur frequently throughout training but are obscured by a loss metric that collapses all variation into a single scalar. To find these hidden transitions, we introduce POLCA, a method for decomposing changes in loss along arbitrary bases of the low-rank training subspace. We use our method to identify clusters of samples that share similar changes in loss during training, disaggregating the overall loss into that of smaller groups of conceptually similar data. We validate our method on synthetic arithmetic and natural language tasks, showing that POLCA recovers clusters that represent interpretable breakthroughs in the model's capabilities. We demonstrate the promise of these hidden phase transitions as a tool for unsupervised interpretability.

Subjects: Machine Learning (cs.LG)
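To make the abstract's core idea concrete, below is a minimal sketch of one plausible reading of a POLCA-style decomposition; it is an illustration under stated assumptions, not the paper's actual implementation. It assumes a sample's loss change between two checkpoints is approximated to first order as the dot product of its loss gradient with the parameter update, splits that inner product along an orthonormal low-rank basis of the training subspace, and then clusters samples by their per-direction loss-change signatures. All variable names and the random stand-in data are hypothetical.

```python
# Hypothetical POLCA-style sketch (assumption, not the paper's method).
# First-order approximation: delta_L_i ~ g_i . delta_theta, decomposed as
# delta_L_i ~ sum_k (g_i . b_k)(b_k . delta_theta) over basis vectors b_k.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_samples, n_params, rank = 200, 1000, 8

# Per-sample loss gradients at a checkpoint and the parameter update
# between two checkpoints (random stand-ins for real training quantities).
grads = rng.normal(size=(n_samples, n_params))
delta_theta = rng.normal(size=n_params)

# Orthonormal basis of the low-rank training subspace (in practice this
# might come from an SVD of checkpoint differences; random here).
basis, _ = np.linalg.qr(rng.normal(size=(n_params, rank)))  # (n_params, rank)

# Per-sample, per-direction loss-change contributions: (n_samples, rank).
contrib = (grads @ basis) * (basis.T @ delta_theta)

# Cluster samples by their loss-change signatures, so samples sharing the
# same hidden transition land in the same cluster.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(contrib)
print(labels[:20])
```

Intuitively, a breakthrough that is invisible in the aggregate loss can still show up as a coherent group of samples whose loss changes load on the same few directions, which is what the clustering step above is meant to surface.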