[2601.18420] Gradient Regularized Natural Gradients
Computer Science > Machine Learning

arXiv:2601.18420 (cs)

[Submitted on 26 Jan 2026 (v1), last revised 26 Mar 2026 (this version, v2)]

Title: Gradient Regularized Natural Gradients

Authors: Satya Prakash Dash, Hossein Abdi, Wei Pan, Samuel Kaski, Mingfei Sun

Abstract: Gradient regularization (GR) has been shown to improve the generalizability of trained models, and Natural Gradient Descent is known to accelerate optimization in the initial phase of training; yet little attention has been paid to how the training dynamics of second-order optimizers can benefit from GR. In this work, we propose Gradient-Regularized Natural Gradients (GRNG), a family of scalable second-order optimizers that integrate explicit gradient regularization with natural gradient updates. Our framework introduces two frequentist algorithms: Regularized Explicit Natural Gradient (RENG), which uses double backpropagation to explicitly minimize the gradient norm, and Regularized Implicit Natural Gradient (RING), which incorporates the regularization implicitly into the update direction. We also propose a Bayesian variant based on a Regularized-Kalman formulation that eliminates the need for FIM inversion entirely. We establish convergence guarantees for GRNG, showing that gradient regularization improves stability and enables convergence to global minima. Empirically, we demonstrate ...
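The explicit variant, RENG, relies on double backpropagation to penalize the gradient norm. As a rough, generic sketch of that technique in PyTorch (not the paper's actual algorithm; the model, loss, and regularization weight `lam` are hypothetical placeholders), the regularized objective can be built by differentiating through the gradient itself:

```python
import torch

def gradient_regularized_loss(model, loss_fn, x, y, lam=1e-2):
    """Generic gradient-norm-penalized loss via double backpropagation.

    A minimal sketch of explicit gradient regularization; `lam` is a
    hypothetical penalty weight, not a value from the paper.
    """
    # Plain task loss.
    loss = loss_fn(model(x), y)
    # create_graph=True keeps the gradient inside the autograd graph,
    # so the penalty below is itself differentiable (the "double
    # backpropagation" step).
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params, create_graph=True)
    grad_norm_sq = sum(g.pow(2).sum() for g in grads)
    return loss + lam * grad_norm_sq

# Usage: calling .backward() on the combined objective triggers the
# second backward pass through the gradient graph.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
x, y = torch.randn(64, 10), torch.randn(64, 1)
total = gradient_regularized_loss(model, torch.nn.functional.mse_loss, x, y)
total.backward()
```

In a natural-gradient setting, the resulting gradient would additionally be preconditioned by an (approximate) inverse Fisher information matrix before the parameter update; the abstract's RING and Regularized-Kalman variants instead fold the regularization into the update direction or avoid the FIM inversion altogether.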