[2510.00236] Per-example gradients: a new frontier for understanding and improving optimizers
Computer Science > Machine Learning
arXiv:2510.00236 (cs)
Submitted on 30 Sep 2025 (v1), last revised 2 Mar 2026 (this version, v2)
Authors: Vincent Roulet, Atish Agarwala

Abstract: Training algorithms in deep learning usually treat a mini-batch of samples as a single object; they average gradients over the mini-batch, and then process the average in various ways. Computing statistics beyond the average may have been seen as prohibitively resource intensive in automatic differentiation (AD) frameworks. We show that this is not the case. Generally, gradient statistics can be implemented through a surgery of the AD graph, which, in some cases, incurs almost no computational and memory overhead compared to the mini-batch gradient computation. Additionally, we show that in certain classes of models, including transformers, JAX's vectorization transformation offers a viable implementation for prototyping and experimentation. We then revise our understanding of two nonlinear operations in optimization through the lens of per-example gradient transformations. We first study signSGD and show that the optimal placement of the sign operation in the gradient processing chain is crucial to success and can be predicted...
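
The abstract points to JAX's vectorization transformation as a practical way to prototype per-example gradient statistics, and to the placement of the sign operation in signSGD as the object of study. The following is a minimal sketch of both ideas, not the authors' implementation: the toy model, loss, and data are illustrative assumptions, and the two sign placements are shown only to make the "sign before vs. after averaging" distinction concrete.

# Minimal sketch (illustrative, not the paper's code): per-example gradients
# via jax.vmap, and two placements of the sign operation for signSGD.
import jax
import jax.numpy as jnp


def loss_fn(params, x, y):
    # Toy linear model with squared error; stands in for any per-example loss.
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)


# Per-example gradients: take the gradient of the single-example loss and
# vectorize it over the batch dimension of (x, y) with vmap.
per_example_grads = jax.vmap(jax.grad(loss_fn), in_axes=(None, 0, 0))


def sign_after_mean(params, x, y):
    # Standard signSGD: average gradients over the mini-batch, then take the sign.
    mean_grad = jax.grad(loss_fn)(params, x, y)
    return jax.tree_util.tree_map(jnp.sign, mean_grad)


def sign_before_mean(params, x, y):
    # Alternative placement: sign each per-example gradient, then average.
    grads = per_example_grads(params, x, y)
    return jax.tree_util.tree_map(lambda g: jnp.mean(jnp.sign(g), axis=0), grads)


if __name__ == "__main__":
    key = jax.random.PRNGKey(0)
    x = jax.random.normal(key, (8, 4))   # batch of 8 examples, 4 features (hypothetical data)
    y = jax.random.normal(key, (8,))
    params = {"w": jnp.zeros(4), "b": jnp.array(0.0)}
    print(sign_after_mean(params, x, y)["w"])
    print(sign_before_mean(params, x, y)["w"])

The vmap-based version materializes one gradient per example, which is exactly the prototyping cost the abstract contrasts with the cheaper AD-graph surgery; it is convenient for experimentation but not the memory-efficient route described in the paper.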