[2602.18997] Implicit Bias and Convergence of Matrix Stochastic Mirror Descent
Summary
This paper explores the convergence properties of Matrix Stochastic Mirror Descent (SMD) in overparameterized settings, proving that it converges to a global interpolator and minimizes the Bregman divergence from initialization, thus shedding light on implicit bias in high-dimensional problems.
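In the notation the abstract uses, and assuming the linear matrix model $y = Wx$ common to the multi-class and matrix-completion settings it cites (a hedged restatement, not the paper's exact theorem), the SMD update and the implicit-bias characterization take the standard form:

$$\nabla\psi(W_{t+1}) = \nabla\psi(W_t) - \eta\,\nabla_W\, \tfrac{1}{2}\big\|W_t x_{i_t} - y_{i_t}\big\|^2,$$

and, in the overparameterized interpolating regime,

$$W_\infty = \operatorname*{arg\,min}_{W:\; W x_i = y_i,\ i=1,\dots,n} D_\psi(W, W_0), \qquad D_\psi(W, W_0) = \psi(W) - \psi(W_0) - \langle \nabla\psi(W_0),\, W - W_0\rangle.$$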
Why It Matters
Understanding the convergence behavior of Matrix SMD is crucial for advancements in machine learning, particularly in multi-class classification and matrix completion tasks. The findings provide insights into how matrix mirror functions influence inductive bias, which is essential for developing more effective algorithms in high-dimensional spaces.
Key Takeaways
- Matrix SMD converges exponentially in overparameterized regimes.
- The algorithm minimizes Bregman divergence from initialization.
- Matrix mirror functions dictate inductive bias in multi-output problems.
- Findings are relevant for multi-class classification and matrix completion.
- Results generalize classical implicit bias results from vector SMD.
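The takeaways above can be illustrated with a minimal NumPy sketch (not the paper's code): matrix SMD on a realizable linear model $y = Wx$ in an overparameterized regime. The mirror map here, $\psi(W) = \tfrac{1}{2}\|W\|_F^2 + \tfrac{1}{3}\sum_{ij}|W_{ij}|^3$, the step size, and the data dimensions are all illustrative stand-ins chosen so that $\nabla\psi$ inverts in closed form.

```python
import numpy as np

# Hypothetical sketch of matrix SMD for a linear model y = W x, using an
# illustrative strongly convex entrywise mirror map
#   psi(W) = 0.5*||W||_F^2 + (1/3) * sum |W_ij|^3
# (a stand-in, not the paper's specific mirror function psi).

def grad_psi(W):
    """Mirror map gradient, entrywise: w + sign(w) * w^2."""
    return W + np.sign(W) * W ** 2

def grad_psi_inv(Z):
    """Inverse of grad_psi, solved entrywise from w + w^2 = |z|."""
    return np.sign(Z) * (np.sqrt(1.0 + 4.0 * np.abs(Z)) - 1.0) / 2.0

rng = np.random.default_rng(0)
m, d, n = 2, 5, 3                  # m*d = 10 > n = 3: overparameterized
W_star = rng.normal(size=(m, d))
X = rng.normal(size=(n, d))
Y = X @ W_star.T                   # realizable data, so interpolators exist

eta = 0.1
W = np.zeros((m, d))               # initialization W_0
for t in range(10000):
    i = rng.integers(n)                         # sample one training pair
    grad = np.outer(W @ X[i] - Y[i], X[i])      # grad of 0.5*||W x_i - y_i||^2
    W = grad_psi_inv(grad_psi(W) - eta * grad)  # mirror step in the dual space

print(np.max(np.abs(X @ W.T - Y)))  # near zero: W interpolates the data
```

Consistent with the paper's claim, the iterates drive the training residual to (numerically) zero; among all interpolators, the limit is characterized by minimal Bregman divergence $D_\psi(\cdot, W_0)$ from initialization, so the choice of $\psi$ is what selects which interpolator is reached.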
arXiv:2602.18997 [stat.ML] (Statistics > Machine Learning) — Submitted on 22 Feb 2026
Title: Implicit Bias and Convergence of Matrix Stochastic Mirror Descent
Authors: Danil Akhtiamov, Reza Ghane, Babak Hassibi
Abstract: We investigate Stochastic Mirror Descent (SMD) with matrix parameters and vector-valued predictions, a framework relevant to multi-class classification and matrix completion problems. Focusing on the overparameterized regime, where the total number of parameters exceeds the number of training samples, we prove that SMD with matrix mirror functions $\psi(\cdot)$ converges exponentially to a global interpolator. Furthermore, we generalize classical implicit bias results of vector SMD by demonstrating that the matrix SMD algorithm converges to the unique solution minimizing the Bregman divergence induced by $\psi(\cdot)$ from initialization subject to interpolating the data. These findings reveal how matrix mirror maps dictate inductive bias in high-dimensional, multi-output problems.
Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG); Optimization and Control (math.OC)
Cite as: arXiv:2602.18997 [stat.ML] (or arXiv:2602.18997v1 [stat.ML] for this version), https://doi.org/10.48550/arXiv.2602.18997