[2602.13177] Improved Regret Guarantees for Online Mirror Descent using a Portfolio of Mirror Maps


Summary

This paper presents improved regret guarantees for Online Mirror Descent (OMD) by utilizing a portfolio of mirror maps, enhancing performance in online convex optimization, particularly for sparse loss functions.

Why It Matters

These results advance online convex optimization in scenarios with sparse loss functions. By showing that adaptively selecting the mirror map can yield polynomial improvements in regret, the paper addresses a core challenge in online decision-making for machine learning and related fields.

Key Takeaways

  • Adaptive mirror maps can significantly improve regret in OMD.
  • Block norm-based mirror maps outperform traditional $L_p$ interpolations for sparse losses.
  • Dynamic geometry selection is crucial for optimizing performance in unknown sparsity scenarios.
  • The proposed meta-algorithm effectively tunes OMD to exploit sparsity.
  • Polynomial improvements in regret can be achieved through strategic mirror map selection.
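To make the "block norm" idea in the takeaways concrete, here is a minimal NumPy sketch of one common mixed block-norm construction: partition the coordinates into contiguous blocks, take an inner norm within each block and an outer norm across blocks. The paper's exact block-norm family is not specified in this summary, so the choice of contiguous blocks and the $(L_1, L_2)$ mix below are illustrative assumptions; note how varying the block size interpolates between the $L_2$ and $L_1$ norms.

```python
import numpy as np

def block_norm(x, block_size, inner=1, outer=2):
    """Mixed block norm: L_inner within each contiguous block of size
    block_size, then L_outer across the resulting block norms.

    block_size=1 recovers the plain L_outer (here L2) norm of x;
    block_size=len(x) recovers the plain L_inner (here L1) norm.
    Illustrative only -- the paper's block-norm family may differ.
    """
    blocks = np.asarray(x, dtype=float).reshape(-1, block_size)
    inner_norms = np.linalg.norm(blocks, ord=inner, axis=1)
    return np.linalg.norm(inner_norms, ord=outer)
```

For example, on `x = [3, 4, 0, 0]` this gives the $L_2$ norm 5.0 with `block_size=1` and the $L_1$ norm 7.0 with `block_size=4`, with intermediate geometries in between.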

Mathematics > Optimization and Control
arXiv:2602.13177 (math) [Submitted on 13 Feb 2026]

Title: Improved Regret Guarantees for Online Mirror Descent using a Portfolio of Mirror Maps
Authors: Swati Gupta, Jai Moondra, Mohit Singh

Abstract: Online mirror descent (OMD) and its variants give a flexible framework for online convex optimization (OCO), where performance depends crucially on the choice of the mirror map. While the geometries underlying online projected gradient descent (OPGD) and online exponentiated gradient (OEG), both special cases of OMD, are well understood, it remains a challenging open question how to construct an optimal mirror map for a given constrained set and a general family of loss functions, e.g., sparse losses. Motivated by parameterizing a near-optimal set of mirror maps, we consider a simpler question: is it even possible to obtain polynomial gains in regret by using mirror maps for geometries that interpolate between $L_1$ and $L_2$, which may not be possible by restricting to only OEG ($L_1$) or OPGD ($L_2$)? Our main result answers this question positively. We show that mirror maps based on block norms adapt better to the sparsity of loss functions, compared to previous $L_p$ (for $p \in [1, 2]$) interpolations. In particular, we construct a family of online convex optimization instances in $\mathbb{R}^d$, where block norm-based mirror maps achieve a provable polynomial (in $d$...
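The abstract contrasts the two classical special cases of OMD: the Euclidean mirror map (giving online projected gradient descent, the $L_2$ geometry) and the negative-entropy mirror map (giving exponentiated gradient, the $L_1$-like geometry). A minimal sketch of one update step of each on the probability simplex, assuming standard textbook forms rather than anything specific to this paper:

```python
import numpy as np

def eg_update(x, grad, eta):
    """One OMD step with the negative-entropy mirror map on the simplex,
    i.e. exponentiated gradient (OEG): multiplicative update, renormalize."""
    w = x * np.exp(-eta * grad)
    return w / w.sum()

def pgd_update(x, grad, eta):
    """One OMD step with the Euclidean (squared L2) mirror map,
    i.e. online projected gradient descent (OPGD): gradient step,
    then Euclidean projection onto the simplex via the standard
    sorting-based algorithm."""
    y = x - eta * grad
    u = np.sort(y)[::-1]                      # sort descending
    css = np.cumsum(u)
    k = np.arange(1, len(y) + 1)
    rho = np.nonzero(u * k > css - 1)[0][-1]  # largest valid support size
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(y - theta, 0.0)
```

Both updates keep the iterate on the simplex; which geometry yields lower regret depends on the loss sequence, which is exactly the adaptivity question the paper studies.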
