[2512.08217] Correction of Decoupled Weight Decay

arXiv - Machine Learning

Summary

This article discusses a correction to decoupled weight decay in machine learning, challenging the conventional assumption that it should be proportional to the learning rate. The author presents findings suggesting that the decay should instead be proportional to the square of the learning rate, which yields stable weight norms at steady state.

Why It Matters

Weight decay is one of the most widely used regularizers in deep learning, and how it scales with the learning rate directly affects training stability and final model quality. This research offers a principled scaling rule that practitioners tuning optimizers such as AdamW can apply to improve performance and stability across applications.

Key Takeaways

  • Decoupled weight decay's relationship to learning rate is re-evaluated.
  • Proportionality to the square of the learning rate may enhance training dynamics.
  • Stable weight and gradient norms can improve model performance (see the sketch after this list).
  • The study challenges existing assumptions in weight decay practices.
  • Empirical verification supports the proposed adjustments in weight decay.
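
To make the last two takeaways concrete, here is a minimal NumPy sketch, not taken from the paper: the constant `c`, the function name, and the exact update rule are illustrative assumptions. It implements an optimizer step whose decoupled weight decay is set proportional to the square of the learning rate, plus a toy check that the weight norm then settles to a value independent of the learning rate:

```python
import numpy as np

def decoupled_step(w, update, lr, c=0.1):
    """One optimizer step with decoupled weight decay tied to lr**2.

    Illustrative only: `c` and this exact update rule are assumptions,
    not an API from the paper.
    """
    decay = c * lr ** 2              # decoupled decay coefficient, proportional to lr^2
    return (1.0 - decay) * w - lr * update

# Toy steady-state check: random updates independent of the weights,
# mimicking the abstract's steady-state independence assumption.
rng = np.random.default_rng(0)
dim = 1000
w = rng.normal(size=dim)
for _ in range(20_000):
    u = rng.normal(size=dim) / np.sqrt(dim)   # E[||u||^2] = 1
    w = decoupled_step(w, u, lr=0.05)

# Prediction: ||w|| settles near U / sqrt(2c) = 1 / sqrt(0.2) ≈ 2.24,
# independent of the learning rate.
print(f"steady-state ||w|| ≈ {np.linalg.norm(w):.2f}")
```

Rerunning with a different learning rate (say lr=0.2) should land near the same norm, since the learning rate cancels out of the predicted fixed point U / sqrt(2c); if the decay were instead proportional to lr, the same algebra predicts a limiting norm that grows like sqrt(lr).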

Computer Science > Machine Learning

arXiv:2512.08217 (cs) [Submitted on 9 Dec 2025 (v1), last revised 20 Feb 2026 (this version, v2)]

Title: Correction of Decoupled Weight Decay
Authors: Jason Chuan-Chih Chou

Abstract: Decoupled weight decay, solely responsible for the performance advantage of AdamW over Adam, has long been set proportional to the learning rate $\gamma$ without question. Some researchers have recently challenged this assumption and argued that decoupled weight decay should instead be set $\propto \gamma^2$ based on orthogonality arguments at steady state. To the contrary, we find that eliminating the contribution of the perpendicular component of the update to the weight norm leads to little change in the training dynamics. Instead, we derive that decoupled weight decay $\propto \gamma^2$ results in a stable weight norm based on the simple assumption that updates become independent of the weights at steady state, regardless of the nature of the optimizer. Based on the same assumption, we derive and empirically verify that the Total Update Contribution (TUC) of a minibatch under the Scion optimizer is better characterized by the momentum-dependent effective learning rate, whose optimal value transfers, and we show that decoupled weight decay $\propto \gamma^2$ leads to stable weight and gradient norms and allows us to better control the trai...
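
The abstract's central claim admits a short back-of-the-envelope derivation. The sketch below is a reconstruction from the stated independence assumption, not the paper's verbatim argument; the symbols $c$, $U$, and $W$ are notation introduced here:

```latex
% Setting: one step with decoupled weight decay lambda = c * gamma^2.
\[
  w_{t+1} = (1 - \lambda)\, w_t - \gamma\, u_t,
  \qquad \lambda = c\, \gamma^2 .
\]
% Assumption (from the abstract): at steady state the update u_t is
% independent of the weights w_t, so the cross term vanishes in expectation:
\[
  \mathbb{E}\,\|w_{t+1}\|^2
  \approx (1 - \lambda)^2\, \mathbb{E}\,\|w_t\|^2
        + \gamma^2\, \mathbb{E}\,\|u_t\|^2 .
\]
% At a fixed point E||w||^2 = W^2, using (1 - lambda)^2 ~ 1 - 2*lambda:
\[
  2\lambda\, W^2 = \gamma^2\, U^2
  \;\Longrightarrow\;
  W = \frac{U}{\sqrt{2c}},
  \qquad U^2 := \mathbb{E}\,\|u_t\|^2 ,
\]
% which no longer contains gamma: the weight norm is stable across
% learning rates precisely because the decay scales as gamma^2.
```

The same algebra with $\lambda \propto \gamma$ leaves a residual factor of $\gamma$ in $W$, which is the sense in which the conventional setting fails to pin down the weight norm.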
