[2603.00742] To Use or not to Use Muon: How Simplicity Bias in Optimizers Matters
Computer Science > Machine Learning

arXiv:2603.00742 (cs) [Submitted on 28 Feb 2026]

Title: To Use or not to Use Muon: How Simplicity Bias in Optimizers Matters

Authors: Sara Dragutinović, Rajesh Ranganath

Abstract: For a long time, Adam has served as the ubiquitous default choice for training deep neural networks. Recently, many new optimizers have been introduced, of which Muon has perhaps gained the most popularity due to its superior training speed. While many papers set out to validate the benefits of Muon, this paper investigates the potential downsides stemming from the mechanism driving that speedup. We explore the biases induced when optimizing with Muon, providing a theoretical analysis of their consequences for the learning trajectories and the solutions learned. While the theory does justify the benefits Muon brings, it also guides our intuition in constructing examples where Muon-optimized models are at a disadvantage. The core problem we emphasize is that Muon optimization removes a simplicity bias that is naturally preserved by older, more thoroughly studied methods such as Stochastic Gradient Descent (SGD). We take first steps toward understanding the consequences this may have: Muon might struggle to uncover common underlying structure across tasks, and be m...
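The abstract refers to "the mechanism driving this speedup" without spelling it out. As background, the Muon update is commonly described as momentum followed by an approximate orthogonalization of the gradient matrix via a Newton-Schulz iteration, which equalizes the singular values of the update (and is the plausible source of the removed simplicity bias the paper studies). The sketch below is illustrative background, not the paper's implementation; function names, coefficients, and hyperparameters are the ones widely circulated for Muon, not taken from this abstract:

```python
import numpy as np

def newton_schulz_orthogonalize(g, steps=5, eps=1e-7):
    """Approximately map g to the nearest semi-orthogonal matrix.

    Quintic Newton-Schulz iteration with the coefficients popularized
    alongside Muon; after a few steps the singular values of the result
    cluster near 1, so the update treats all directions roughly equally.
    """
    a, b, c = 3.4445, -4.7750, 2.0315
    x = g / (np.linalg.norm(g) + eps)   # normalize so spectral norm <= 1
    transposed = x.shape[0] > x.shape[1]
    if transposed:                       # keep the x @ x.T product small
        x = x.T
    for _ in range(steps):
        A = x @ x.T
        x = a * x + (b * A + c * A @ A) @ x
    return x.T if transposed else x

def muon_step(w, g, momentum, lr=0.02, beta=0.95):
    """One simplified Muon step: momentum buffer, then orthogonalize."""
    momentum = beta * momentum + g
    w = w - lr * newton_schulz_orthogonalize(momentum)
    return w, momentum
```

By contrast, an SGD step `w -= lr * g` scales the update by the raw singular values of `g`, which keeps small-magnitude directions small; the orthogonalization above is exactly what discards that spectral information.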