[2602.13510] Stochastic variance reduced extragradient methods for solving hierarchical variational inequalities

arXiv - Machine Learning · 3 min read

Summary

This paper presents stochastic variance reduced extragradient methods for solving hierarchical variational inequalities, proving convergence rates and complexity bounds for these algorithms in both Euclidean and Bregman setups.

Why It Matters

The research addresses complex optimization problems that are foundational in fields like machine learning and game theory. By improving convergence rates for hierarchical variational inequalities, this work can enhance algorithm efficiency in practical applications, making it relevant for researchers and practitioners in optimization and AI.

Key Takeaways

  • Introduces variance reduced stochastic algorithms for hierarchical variational inequalities.
  • Proves convergence rates and complexity statements for these algorithms.
  • Applies to both Euclidean and Bregman setups, broadening applicability.
  • Addresses key challenges in optimization with a two-level hierarchical structure.
  • Significant implications for optimization in machine learning and game theory.
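The "variance reduced" ingredient in the first takeaway can be illustrated with the standard SVRG-style estimator for a finite-sum operator. This is a generic sketch, not the paper's hierarchical algorithm: the matrices `A_i`, the dimension, and the snapshot scheme are all made-up illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite-sum operator F(z) = (1/n) * sum_i A_i z, standing in
# for the finite-sum structure the paper assumes; the matrices A_i are
# synthetic data chosen only for illustration.
n, d = 10, 4
A = rng.standard_normal((n, d, d))

def F_i(i, z):
    # One component of the finite sum.
    return A[i] @ z

def F(z):
    # The full (expensive) operator: average over all components.
    return A.mean(axis=0) @ z

def vr_estimate(z, w):
    # SVRG-style variance-reduced estimate of F(z): a single random
    # component, corrected by a full evaluation at a snapshot point w.
    # It is unbiased, and its variance shrinks as z approaches w.
    i = rng.integers(n)
    return F_i(i, z) - F_i(i, w) + F(w)

z = rng.standard_normal(d)
w = z.copy()
# At the snapshot (z == w) the estimate is exact, with zero variance.
assert np.allclose(vr_estimate(z, w), F(z))
```

The snapshot correction `- F_i(i, w) + F(w)` is what distinguishes this from a plain stochastic estimate `F_i(i, z)`: both are unbiased, but only the corrected one has variance that vanishes near the snapshot, which is what enables the faster rates such methods target.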

Mathematics > Optimization and Control · arXiv:2602.13510 (math)

[Submitted on 13 Feb 2026]

Title: Stochastic variance reduced extragradient methods for solving hierarchical variational inequalities

Authors: Pavel Dvurechensky, Andrea Ebner, Johannes Carl Schnebel, Shimrit Shtern, Mathias Staudigl

Abstract: We are concerned with optimization in a broad sense through the lens of solving variational inequalities (VIs) -- a class of problems that are so general that they cover as particular cases minimization of functions, saddle-point (minimax) problems, Nash equilibrium problems, and many others. The key challenges in our problem formulation are the two-level hierarchical structure and finite-sum representation of the smooth operators in each level. For this setting, we are the first to prove convergence rates and complexity statements for variance-reduced stochastic algorithms approaching the solution of hierarchical VIs in Euclidean and Bregman setups.

Subjects: Optimization and Control (math.OC); Computer Science and Game Theory (cs.GT); Machine Learning (cs.LG)

Cite as: arXiv:2602.13510 [math.OC] (or arXiv:2602.13510v1 [math.OC] for this version), https://doi.org/10.48550/arXiv.2602.13510

Submission history: from Mathias...
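The abstract notes that VIs subsume saddle-point (minimax) problems. The two-step extrapolation that gives extragradient methods their name can be sketched on the simplest such problem. This is the classical deterministic method of this family, not the paper's variance-reduced hierarchical algorithm; the bilinear objective, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

def F(z):
    # VI operator of the toy saddle-point problem min_x max_y x*y:
    # F(x, y) = (grad_x, -grad_y) = (y, -x), a monotone operator whose
    # unique solution is (0, 0).
    x, y = z
    return np.array([y, -x])

def extragradient(z0, eta=0.3, iters=500):
    # Korpelevich-style extragradient iteration: evaluate the operator at
    # an extrapolated midpoint, then take the actual step from z using
    # that midpoint value.
    z = np.array(z0, dtype=float)
    for _ in range(iters):
        z_half = z - eta * F(z)    # extrapolation (prediction) step
        z = z - eta * F(z_half)    # update with the operator at z_half
    return z

z_star = extragradient([1.0, 1.0])
# Plain simultaneous gradient descent-ascent cycles or diverges on this
# bilinear problem; the extragradient iterates contract to (0, 0).
```

The midpoint evaluation is the crucial difference from one-step gradient methods: on this rotation-like operator it turns a non-contracting update into a contracting one.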

