[2602.07712] Towards Robust Scaling Laws for Optimizers
Summary
This paper explores the scaling laws for various optimizers in machine learning, proposing a robust framework for comparing their performance as model size and training data increase.
Why It Matters
Understanding how different optimizers behave under scaling conditions is crucial for improving the efficiency and effectiveness of large language models. This research addresses gaps in existing studies that typically fix the optimizer, offering insights that could lead to better optimization strategies and model performance.
Key Takeaways
- Existing Chinchilla-style scaling laws fitted separately per optimizer are ill-conditioned, with highly correlated fitted parameters.
- A new robust scaling law with shared power-law exponents is proposed for better optimizer comparison.
- Theoretical analysis shows that Chinchilla-style scaling laws can emerge from loss decomposition.
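To make the comparison concrete, here is a minimal sketch of the two parameterizations being contrasted. The Chinchilla-style form L(N, D) = E + A/N^α + B/D^β is standard (the default coefficients below are the fitted values from Hoffmann et al., 2022); the shared-exponent variant shown here, in which each optimizer contributes only rescaling factors on the effective parameter and token counts, is a hypothetical illustration of the paper's idea, not its exact functional form.

```python
def chinchilla_loss(n_params, n_tokens, E=1.69, A=406.4, B=410.7,
                    alpha=0.34, beta=0.28):
    """Chinchilla-style parametric loss surface L(N, D).

    Default coefficients are the published Chinchilla fit; in a
    per-optimizer setup, all five would be refit for each optimizer.
    """
    return E + A / n_params**alpha + B / n_tokens**beta


def shared_exponent_loss(n_params, n_tokens, s_n=1.0, s_d=1.0, **kw):
    """Hypothetical shared-exponent variant.

    The exponents alpha and beta are shared across optimizers; each
    optimizer contributes only rescaling factors (s_n, s_d) on the
    effective model size and data size, so optimizers can be compared
    directly through those two numbers.
    """
    return chinchilla_loss(s_n * n_params, s_d * n_tokens, **kw)


# A rescaling factor s_n > 1 acts like "free" extra parameters
# (and s_d > 1 like extra tokens), so the predicted loss drops.
base = shared_exponent_loss(1e9, 2e10)                  # reference optimizer
better = shared_exponent_loss(1e9, 2e10, s_n=1.5, s_d=1.2)
assert better < base
```

Fitting far fewer free parameters per optimizer (two rescaling factors instead of a full five-parameter law) is what makes the joint fit better conditioned.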
Computer Science > Machine Learning
arXiv:2602.07712 (cs)
[Submitted on 7 Feb 2026 (v1), last revised 24 Feb 2026 (this version, v2)]
Title: Towards Robust Scaling Laws for Optimizers
Authors: Alexandra Volkova, Mher Safaryan, Christoph H. Lampert, Dan Alistarh
Abstract: The quality of Large Language Model (LLM) pretraining depends on multiple factors, including the compute budget and the choice of optimization algorithm. Empirical scaling laws are widely used to predict loss as model size and training data grow; however, almost all existing studies fix the optimizer (typically AdamW). At the same time, a new generation of optimizers (e.g., Muon, Shampoo, SOAP) promises faster and more stable convergence, but their relationship with model and data scaling is not yet well understood. In this work, we study scaling laws across different optimizers. Empirically, we show that 1) separate Chinchilla-style scaling laws for each optimizer are ill-conditioned and have highly correlated parameters. Instead, 2) we propose a more robust law with shared power-law exponents and optimizer-specific rescaling factors, which enable direct comparison between optimizers. Finally, 3) we provide a theoretical analysis of gradient-based methods for the proxy task of a convex quadratic objective, demonstrating that Chinchilla-style scaling laws emerge natu...