[2602.20937] Extending $μ$P: Spectral Conditions for Feature Learning Across Optimizers
Summary
This article presents a framework that extends the maximal update parameterization ($μ$P) to a broader class of optimizers, improving feature learning in large language models by enabling effective hyperparameter transfer across model sizes.
Why It Matters
As large language models become increasingly complex, optimizing their training processes is crucial. This research addresses the challenge of hyperparameter tuning, which is often resource-intensive. By proposing a method that allows hyperparameters tuned on smaller models to be applied to larger ones, it could significantly reduce computational costs and improve training efficiency across various optimizers.
Key Takeaways
- Introduces a novel framework for deriving $μ$P for multiple optimizers.
- Demonstrates effective zero-shot learning rate transfer across model sizes.
- Provides empirical insights into depth-scaling parameterization for optimizers.
- Addresses the computational challenges of hyperparameter tuning in large models.
- Expands the applicability of $μ$P beyond traditional methods.
Computer Science > Machine Learning
arXiv:2602.20937 (cs) [Submitted on 24 Feb 2026]
Title: Extending $μ$P: Spectral Conditions for Feature Learning Across Optimizers
Authors: Akshita Gupta, Marieme Ngom, Sam Foreman, Venkatram Vishwanath
Abstract: Several variations of adaptive first-order and second-order optimization methods have been proposed to accelerate and scale the training of large language models. The performance of these optimization routines is highly sensitive to the choice of hyperparameters (HPs), which are computationally expensive to tune for large-scale models. Maximal update parameterization $(\mu$P$)$ is a set of scaling rules which aims to make the optimal HPs independent of the model size, thereby allowing the HPs tuned on a smaller (computationally cheaper) model to be transferred to train a larger, target model. Despite promising results for SGD and Adam, deriving $\mu$P for other optimizers is challenging because the underlying tensor programming approach is difficult to grasp. Building on recent work that introduced spectral conditions as an alternative to tensor programs, we propose a novel framework to derive $\mu$P for a broader class of optimizers, including AdamW, ADOPT, LAMB, Sophia, Shampoo and Muon. We implement our $\mu$P derivations on multiple benchmark models and de...
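To make the spectral-condition idea concrete: prior work on which this paper builds requires that a hidden layer's weight matrix, and each optimizer update to it, have a spectral norm on the order of $\sqrt{\text{fan\_out}/\text{fan\_in}}$, so that feature magnitudes stay well-behaved as width grows. The sketch below, in NumPy, shows one optimizer-agnostic way to satisfy such a condition by rescaling a raw update to a target spectral norm. It is an illustrative toy, not the paper's implementation; the function names and the exact rescaling rule are assumptions for exposition.

```python
import numpy as np

def spectral_norm(w):
    # Largest singular value of the weight (or update) matrix.
    return np.linalg.svd(w, compute_uv=False)[0]

def spectrally_scaled_update(raw_update, fan_in, fan_out):
    """Rescale an optimizer's raw update so its spectral norm matches a
    mu-P-style target of sqrt(fan_out / fan_in), regardless of which
    optimizer (Adam, Shampoo, Muon, ...) produced the raw update.
    Illustrative only -- the paper derives per-optimizer scaling rules
    rather than rescaling updates post hoc."""
    target = np.sqrt(fan_out / fan_in)
    return raw_update * (target / spectral_norm(raw_update))

# Toy example: a square hidden-layer update from some optimizer.
fan_in, fan_out = 1024, 1024
raw = np.random.default_rng(0).normal(size=(fan_out, fan_in))
scaled = spectrally_scaled_update(raw, fan_in, fan_out)
print(spectral_norm(scaled))  # approximately sqrt(fan_out / fan_in) = 1.0
```

Because the target norm depends only on the layer's shape, a learning rate tuned on a narrow proxy model remains on the correct scale at larger widths, which is the mechanism behind the zero-shot transfer the summary describes.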