[2510.26303] Implicit Bias of Per-sample Adam on Separable Data: Departure from the Full-batch Regime
Computer Science > Machine Learning

arXiv:2510.26303 (cs)

[Submitted on 30 Oct 2025 (v1), last revised 4 Mar 2026 (this version, v3)]

Title: Implicit Bias of Per-sample Adam on Separable Data: Departure from the Full-batch Regime
Authors: Beomhan Baek, Minhak Song, Chulhee Yun

Abstract: Adam [Kingma & Ba, 2015] is the de facto optimizer in deep learning, yet its theoretical understanding remains limited. Prior analyses show that Adam favors solutions aligned with $\ell_\infty$-geometry, but these results are restricted to the full-batch regime. In this work, we study the implicit bias of incremental Adam (using one sample per step) for logistic regression on linearly separable data, and show that its bias can deviate from the full-batch behavior. As an extreme example, we construct datasets on which incremental Adam provably converges to the $\ell_2$-max-margin classifier, in contrast to the $\ell_\infty$-max-margin bias of full-batch Adam. For general datasets, we characterize its bias using a proxy algorithm for the $\beta_2 \to 1$ limit. This proxy maximizes a data-adaptive Mahalanobis-norm margin, whose associated covariance matrix is determined by a data-dependent dual fixed-point formulation. We further present concrete datasets where this bias reduces to the standard $\ell_2$- and $\ell_\infty$-…
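For concreteness, the following is a minimal sketch of the incremental (one-sample-per-step) Adam procedure the abstract studies, applied to logistic regression on separable data. The toy dataset, hyperparameters, and fixed cyclic sample order are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def incremental_adam_logistic(X, y, lr=0.1, beta1=0.9, beta2=0.999,
                                  eps=1e-8, epochs=1000):
        """Incremental Adam: one standard Adam step per sample, cycling
        through the data in a fixed order.

        X: (n, d) features; y: (n,) labels in {-1, +1}.
        """
        n, d = X.shape
        w = np.zeros(d)
        m = np.zeros(d)  # first-moment estimate
        v = np.zeros(d)  # second-moment estimate
        t = 0
        for _ in range(epochs):
            for i in range(n):
                t += 1
                # gradient of log(1 + exp(-y_i w.x_i)) w.r.t. w
                margin = y[i] * (X[i] @ w)
                g = -y[i] * X[i] / (1.0 + np.exp(margin))
                m = beta1 * m + (1 - beta1) * g
                v = beta2 * v + (1 - beta2) * g * g
                m_hat = m / (1 - beta1 ** t)  # bias correction
                v_hat = v / (1 - beta2 ** t)
                w -= lr * m_hat / (np.sqrt(v_hat) + eps)
        return w

    # toy linearly separable data (illustrative only)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 2))
    y = np.sign(X[:, 0] + 0.5 * X[:, 1])
    w = incremental_adam_logistic(X, y)
    print("limit direction:", w / np.linalg.norm(w))

Since the loss is driven to zero on separable data, the normalized direction of $w$, rather than $w$ itself, is the quantity whose limit the implicit-bias analysis characterizes.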
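The data-adaptive Mahalanobis-norm margin mentioned in the abstract can be written, using a standard rendering of the Mahalanobis norm assumed here rather than the paper's exact formulation, as the constrained max-margin problem

\[
w^\star \in \arg\max_{\|w\|_M \le 1} \; \min_{i} \; y_i \, w^\top x_i,
\qquad \|w\|_M := \sqrt{w^\top M \, w},
\]

where $M$ is the covariance matrix determined by the paper's data-dependent dual fixed-point formulation; taking $M = I$ recovers the $\ell_2$-max-margin classifier that incremental Adam provably reaches on the constructed datasets.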