[2601.06597] Understanding and inverse design of implicit bias in stochastic learning: a geometric perspective
arXiv:2601.06597 (cs)
Computer Science > Machine Learning
[Submitted on 10 Jan 2026 (v1), last revised 4 Apr 2026 (this version, v2)]

Title: Understanding and inverse design of implicit bias in stochastic learning: a geometric perspective
Authors: Nicola Aladrah, Emanuele Ballarin, Matteo Biagetti, Alessio Ansuini, Alberto d'Onofrio, Fabio Anselmi

Abstract: A key challenge in machine learning is to explain how learning dynamics select among the many solutions that achieve identical loss values in overparameterized models, a phenomenon known as implicit bias. Controlling this bias offers a direct handle on learned representations, which are central to interpretability, robustness, and reasoning in modern AI systems. Yet, despite its importance, existing explanations remain largely ad hoc and lack a unifying mechanism. We develop a theoretical and constructive framework in which implicit bias emerges as a geometric correction induced by the interplay between gradient noise and continuous symmetries of the loss. We compute the induced bias across a range of architectures, predicting new behaviors and explaining known ones. The approach also enables inverse design: by engineering predictor-preserving parameterizations, it is possible to shape the bias, with sparsity and spectral...
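The abstract does not spell out the paper's construction, but the phenomenon it describes can be illustrated with a minimal, hypothetical toy model. Below, a scalar predictor is overparameterized as w = a·b, so the rescaling (a, b) → (λa, b/λ) is a continuous predictor-preserving symmetry, and every point on the curve a·b = 1 attains zero loss on the single datapoint. Deterministic gradient descent started on that curve stays wherever it began, while gradient noise (here, label noise) induces a slow drift along the zero-loss set toward the balanced solution |a| ≈ |b|, i.e., the noise selects among equally good minima. All names and hyperparameters here are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def grads(a, b, y):
    """Gradients of L = 0.5 * (a*b - y)^2 w.r.t. a and b."""
    r = a * b - y          # residual
    return r * b, r * a

def train(a, b, steps=50_000, lr=1e-2, noise=0.0):
    """Plain SGD on the single datapoint (x, y) = (1, 1),
    with optional Gaussian label noise of std `noise`."""
    for _ in range(steps):
        y = 1.0 + noise * rng.standard_normal()
        ga, gb = grads(a, b, y)
        a -= lr * ga
        b -= lr * gb
    return a, b

# Start already on the zero-loss curve a*b = 1, but far from balanced.
a_gd, b_gd = train(3.0, 1.0 / 3.0)               # noiseless: stays put
a_sgd, b_sgd = train(3.0, 1.0 / 3.0, noise=1.0)  # noisy: drifts to |a| ~ |b|

print("GD  imbalance |a|-|b|:", abs(a_gd) - abs(b_gd))
print("SGD imbalance |a|-|b|:", abs(a_sgd) - abs(b_sgd))
print("SGD product a*b:", a_sgd * b_sgd)
```

The drift is second order in the learning rate: to first order the label-noise kicks are normal to the zero-loss manifold, and their second-order effect shrinks a² − b² in expectation while the deterministic gradient keeps a·b pinned near 1, consistent with the abstract's picture of the bias as a geometric correction from noise interacting with a symmetry.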