[2602.20937] Extending $μ$P: Spectral Conditions for Feature Learning Across Optimizers

arXiv - Machine Learning · 4 min read

Summary

This article presents a framework for extending the maximal update parameterization ($μ$P) to a broader class of optimizers, preserving feature learning in large language models and enabling zero-shot hyperparameter transfer across model sizes.

Why It Matters

As large language models become increasingly complex, optimizing their training processes is crucial. This research addresses the challenge of hyperparameter tuning, which is often resource-intensive. By proposing a method that allows hyperparameters tuned on smaller models to be applied to larger ones, it could significantly reduce computational costs and improve training efficiency across various optimizers.

Key Takeaways

  • Introduces a novel framework for deriving $μ$P for multiple optimizers.
  • Demonstrates effective zero-shot learning rate transfer across model sizes.
  • Provides empirical insights into depth-scaling parameterization for optimizers.
  • Addresses the computational challenges of hyperparameter tuning in large models.
  • Expands the applicability of $μ$P beyond traditional methods.
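The zero-shot learning-rate transfer in the takeaways above can be illustrated with a toy width-scaling rule. This is a minimal sketch, assuming the standard $μ$P prescription for Adam-like optimizers, in which hidden-layer learning rates shrink as 1/width; the function name, base width, and learning rate are illustrative, not taken from the paper:

```python
def mup_adam_lr(base_lr: float, base_width: int, width: int) -> float:
    # Under muP with Adam-like optimizers, hidden-layer learning rates
    # scale as 1/width, so a base_lr tuned on a small proxy model
    # transfers zero-shot to wider models. (Illustrative helper.)
    return base_lr * base_width / width

# Tune base_lr once on a narrow proxy model, then reuse it at scale:
base_lr, base_width = 1e-3, 256
for width in (256, 1024, 4096):
    print(f"width={width}: lr={mup_adam_lr(base_lr, base_width, width):.2e}")
```

In practice this per-layer rule is applied only to hidden weight matrices; embedding and output layers follow different $μ$P scaling rules.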

Computer Science > Machine Learning
arXiv:2602.20937 (cs) · Submitted on 24 Feb 2026

Title: Extending $μ$P: Spectral Conditions for Feature Learning Across Optimizers
Authors: Akshita Gupta, Marieme Ngom, Sam Foreman, Venkatram Vishwanath

Abstract: Several variations of adaptive first-order and second-order optimization methods have been proposed to accelerate and scale the training of large language models. The performance of these optimization routines is highly sensitive to the choice of hyperparameters (HPs), which are computationally expensive to tune for large-scale models. Maximal update parameterization ($\mu$P) is a set of scaling rules that aims to make the optimal HPs independent of the model size, thereby allowing the HPs tuned on a smaller (computationally cheaper) model to be transferred to train a larger, target model. Despite promising results for SGD and Adam, deriving $\mu$P for other optimizers is challenging because the underlying tensor-programs approach is difficult to grasp. Building on recent work that introduced spectral conditions as an alternative to tensor programs, we propose a novel framework to derive $\mu$P for a broader class of optimizers, including AdamW, ADOPT, LAMB, Sophia, Shampoo, and Muon. We implement our $\mu$P derivations on multiple benchmark models and de...
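The spectral conditions the abstract builds on can be sketched concretely. Assuming the spectral condition for feature learning from prior work (the alternative to tensor programs the abstract cites): the weights, and the updates, of a fan_in → fan_out layer should have spectral norm on the order of sqrt(fan_out / fan_in), with constants omitted. A minimal NumPy illustration with hypothetical function names:

```python
import numpy as np

def spectral_target(fan_out: int, fan_in: int) -> float:
    # Desired spectral-norm scale for a fan_in -> fan_out weight matrix
    # under the spectral condition: ||W||_2 ~ sqrt(fan_out / fan_in).
    return float(np.sqrt(fan_out / fan_in))

def spectrally_scaled_init(fan_out: int, fan_in: int, rng) -> np.ndarray:
    # Draw a Gaussian matrix, then rescale it so its largest singular
    # value exactly matches the target spectral norm.
    W = rng.standard_normal((fan_out, fan_in))
    sigma_max = np.linalg.norm(W, ord=2)  # largest singular value
    return W * (spectral_target(fan_out, fan_in) / sigma_max)

rng = np.random.default_rng(0)
W = spectrally_scaled_init(1024, 256, rng)
print(np.linalg.norm(W, ord=2))  # matches sqrt(1024/256) = 2.0
```

Enforcing the same spectral scale on optimizer updates, not just on initialization, is what lets the framework cover optimizers like Muon and Shampoo without tensor-program derivations.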
