[2602.17080] Adam Improves Muon: Adaptive Moment Estimation with Orthogonalized Momentum

arXiv - Machine Learning · 4 min read

Summary

This article presents NAMO and NAMO-D, new optimizers that integrate Muon-style orthogonalized momentum with Adam-type adaptive moment estimation, showing improved performance in large language model training.

Why It Matters

NAMO and NAMO-D advance optimization for machine learning, particularly for training large language models. By combining the strengths of two existing methods, Adam's noise adaptation and Muon's orthogonalized momentum, these optimizers offer improved convergence rates and empirical performance, which matter to researchers and practitioners in the field.

Key Takeaways

  • NAMO and NAMO-D combine orthogonalized momentum with norm-based, Adam-type adaptive stepsizes.
  • Both optimizers demonstrate superior performance over AdamW and Muon in experiments.
  • NAMO-D right-multiplies the orthogonalized momentum by a diagonal matrix with clamped entries, enabling neuron-wise noise adaptation.
  • The proposed optimizers maintain orthogonality while enhancing convergence rates.
  • Results indicate significant gains in training large language models like GPT-2.

Computer Science > Machine Learning
arXiv:2602.17080 (cs) · Submitted on 19 Feb 2026

Title: Adam Improves Muon: Adaptive Moment Estimation with Orthogonalized Momentum
Authors: Minxin Zhang, Yuxuan Liu, Hayden Scheaffer

Abstract: Efficient stochastic optimization typically integrates an update direction that performs well in the deterministic regime with a mechanism adapting to stochastic perturbations. While Adam uses adaptive moment estimates to promote stability, Muon utilizes the weight layers' matrix structure via orthogonalized momentum, showing superior performance in large language model training. We propose a new optimizer and a diagonal extension, NAMO and NAMO-D, providing the first principled integration of orthogonalized momentum with norm-based Adam-type noise adaptation. NAMO scales orthogonalized momentum using a single adaptive stepsize, preserving orthogonality while improving upon Muon at negligible additional cost. NAMO-D instead right-multiplies orthogonalized momentum by a diagonal matrix with clamped entries. This design enables neuron-wise noise adaptation and aligns with the common near block-diagonal Hessian structure. Under standard assumptions, we establish optimal convergence rates for both algorithms in the deterministic setting and show that, in the stochastic setting, their ...
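The abstract's two designs can be sketched in NumPy: a Muon-style Newton-Schulz orthogonalization of the momentum, then either a single adaptive stepsize built from the gradient norm (NAMO-like) or a right-multiplication by a diagonal matrix with clamped entries (NAMO-D-like). This is a minimal sketch, not the authors' exact algorithm: the Newton-Schulz coefficients follow Muon's public implementation, but the particular second-moment recursions, the column-wise scales in `namo_d_step`, and the clamp range are illustrative assumptions.

```python
import numpy as np

def newton_schulz_orth(M, steps=5):
    """Approximately orthogonalize M via the quintic Newton-Schulz
    iteration used by Muon (coefficients from Muon's implementation)."""
    a, b, c = 3.4445, -4.7750, 2.0315
    X = M / (np.linalg.norm(M) + 1e-7)      # normalize so the iteration converges
    transposed = X.shape[0] > X.shape[1]
    if transposed:                          # iterate on the wide orientation
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if transposed else X

def namo_step(W, G, state, lr=0.02, beta1=0.95, beta2=0.999, eps=1e-8):
    """NAMO-like update (sketch): orthogonalized momentum scaled by a
    single Adam-type stepsize derived from the gradient norm."""
    state["m"] = beta1 * state["m"] + (1 - beta1) * G
    state["v"] = beta2 * state["v"] + (1 - beta2) * np.linalg.norm(G) ** 2
    O = newton_schulz_orth(state["m"])
    return W - lr * O / (np.sqrt(state["v"]) + eps)

def namo_d_step(W, G, state, lr=0.02, beta1=0.95, beta2=0.999,
                eps=1e-8, clamp=(0.1, 10.0)):
    """NAMO-D-like update (sketch): right-multiply the orthogonalized
    momentum by a diagonal matrix of clamped, column-wise adaptive
    scales, giving neuron-wise noise adaptation."""
    state["m"] = beta1 * state["m"] + (1 - beta1) * G
    state["vd"] = beta2 * state["vd"] + (1 - beta2) * np.sum(G * G, axis=0)
    O = newton_schulz_orth(state["m"])
    d = np.clip(1.0 / (np.sqrt(state["vd"]) + eps), *clamp)  # clamped entries
    return W - lr * O * d          # broadcasting = O @ diag(d)
```

Note how NAMO leaves the orthogonalized direction intact (one scalar stepsize), while NAMO-D rescales per output column; the clamp keeps the diagonal entries bounded, matching the abstract's description of clamped entries.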
