[2602.13510] Stochastic variance reduced extragradient methods for solving hierarchical variational inequalities
Summary
This paper presents stochastic variance reduced extragradient methods for solving hierarchical variational inequalities, proving convergence rates and complexity bounds for these algorithms in both Euclidean and Bregman settings.
Why It Matters
The research addresses complex optimization problems that are foundational in fields like machine learning and game theory. By improving convergence rates for hierarchical variational inequalities, this work can enhance algorithm efficiency in practical applications, making it relevant for researchers and practitioners in optimization and AI.
Key Takeaways
- Introduces variance reduced stochastic algorithms for hierarchical variational inequalities.
- Proves convergence rates and complexity statements for these algorithms.
- Applies to both Euclidean and Bregman setups, broadening applicability.
- Addresses key challenges in optimization with a two-level hierarchical structure.
- Significant implications for optimization in machine learning and game theory.
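The two-level structure referenced in the takeaways can be made concrete. A generic hierarchical VI (the paper's exact formulation may differ) asks for a solution of an outer VI posed over the solution set of an inner one, with each operator given as a finite sum:

```latex
% Inner (lower-level) VI over a feasible set X with operator F; its
% solution set S is the feasible set of the outer (upper-level) VI:
\text{find } x^\star \in S \quad \text{s.t.} \quad
\langle G(x^\star),\, u - x^\star \rangle \;\ge\; 0
\quad \text{for all } u \in S,
\qquad
S \;=\; \bigl\{ x \in X : \langle F(x),\, u - x \rangle \ge 0 \ \ \forall u \in X \bigr\},
% with finite-sum (empirical-average) operators at each level:
\qquad
F(x) \;=\; \frac{1}{n}\sum_{i=1}^{n} F_i(x),
\qquad
G(x) \;=\; \frac{1}{m}\sum_{j=1}^{m} G_j(x).
```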
Mathematics > Optimization and Control

arXiv:2602.13510 [math.OC] (Submitted on 13 Feb 2026)

Title: Stochastic variance reduced extragradient methods for solving hierarchical variational inequalities
Authors: Pavel Dvurechensky, Andrea Ebner, Johannes Carl Schnebel, Shimrit Shtern, Mathias Staudigl

Abstract: We are concerned with optimization in a broad sense through the lens of solving variational inequalities (VIs) -- a class of problems that are so general that they cover as particular cases minimization of functions, saddle-point (minimax) problems, Nash equilibrium problems, and many others. The key challenges in our problem formulation are the two-level hierarchical structure and finite-sum representation of the smooth operators in each level. For this setting, we are the first to prove convergence rates and complexity statements for variance-reduced stochastic algorithms approaching the solution of hierarchical VIs in Euclidean and Bregman setups.

Subjects: Optimization and Control (math.OC); Computer Science and Game Theory (cs.GT); Machine Learning (cs.LG)
Cite as: arXiv:2602.13510 [math.OC], https://doi.org/10.48550/arXiv.2602.13510
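To illustrate the two ingredients named in the title, the sketch below combines a classical extragradient step with an SVRG-style variance-reduced operator estimate, applied to a single-level, strongly monotone toy VI (a regularized bilinear saddle point). This is not the paper's hierarchical algorithm; the problem, step size, and snapshot schedule are all illustrative assumptions.

```python
import numpy as np

# Toy problem: min_x max_y (mu/2)||x||^2 + x^T A y - (mu/2)||y||^2,
# whose optimality conditions form a strongly monotone VI with operator
# F(x, y) = (mu*x + A y, mu*y - A^T x), where A is a finite-sum average.
rng = np.random.default_rng(0)
d, n = 3, 8
mu = 1.0                                   # strong-monotonicity modulus
A_parts = rng.standard_normal((n, d, d))   # finite-sum components A_i
A = A_parts.mean(axis=0)                   # full operator uses the average

def F(z, Ai):
    """VI operator built from one component matrix Ai (or the average A)."""
    x, y = z[:d], z[d:]
    return np.concatenate([mu * x + Ai @ y, mu * y - Ai.T @ x])

gamma = 0.1                        # illustrative step size
z = np.ones(2 * d)                 # unique solution of the toy VI is z* = 0
for k in range(1500):
    w = z                          # snapshot (taken every iteration here)
    Fw = F(w, A)                   # full finite-sum operator at the snapshot
    z_half = z - gamma * Fw        # extragradient extrapolation step
    i = rng.integers(n)            # sample one finite-sum component
    # SVRG-style estimate: F_i(z_half) - F_i(w) + F(w)
    g = F(z_half, A_parts[i]) - F(w, A_parts[i]) + Fw
    z = z - gamma * g              # extragradient update step

print(np.linalg.norm(z))           # distance to the solution z* = 0
```

The variance-reduced estimate is unbiased and its noise shrinks as the iterate approaches the snapshot, which is what lets such methods keep a fixed step size; the hierarchical setting analyzed in the paper layers this machinery over two coupled VI levels.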