[2602.20578] Upper-Linearizability of Online Non-Monotone DR-Submodular Maximization over Down-Closed Convex Sets


Summary

This paper studies online maximization of non-monotone Diminishing-Return (DR)-submodular functions over down-closed convex sets. Its central structural result shows that this class is $1/e$-linearizable, which reduces the problem to online linear optimization and yields improved static and dynamic regret bounds.
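For readers new to the setting, the two standard definitions behind the title are given below. These are textbook-standard statements, not quoted from the paper itself:

```latex
% A differentiable F : [0,1]^n -> R is DR-submodular if its gradient is
% entrywise antitone:
\[
  x \le y \ \text{(coordinatewise)} \;\Longrightarrow\; \nabla F(x) \ge \nabla F(y).
\]
% A set K \subseteq [0,1]^n is down-closed if it contains every point
% dominated by one of its members:
\[
  y \in \mathcal{K},\quad 0 \le x \le y \;\Longrightarrow\; x \in \mathcal{K}.
\]
```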

Why It Matters

The findings address a known weakness of current projection-free methods in this regime: suboptimal regret and limited feedback guarantees. The improved regret bounds, which hold across feedback models, are relevant to online decision-making applications in machine learning.

Key Takeaways

  • Introduces a new structural result: online maximization of non-monotone DR-submodular functions is $1/e$-linearizable, reducing it to online linear optimization (see the sketch after this list).
  • Achieves $O(T^{1/2})$ static regret with a single gradient query per round.
  • Unlocks adaptive and dynamic regret guarantees across various feedback models.
  • Improves state-of-the-art results in the field of online optimization.
  • Provides insights into exponential reparametrization and surrogate potentials.
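The paper's exact algorithm is not reproduced in this summary. The following is a minimal illustrative sketch of the general reduction pattern the takeaways describe: run an online linear optimizer (here, online projected gradient ascent, an assumed stand-in) in a reparametrized space, and feed it one surrogate-gradient query per round. The map `phi`, the surrogate, and the toy constraint set below are placeholders, not the paper's construction.

```python
import numpy as np

def phi(z):
    """Illustrative exponential reparametrization z -> x = 1 - exp(-z)
    (a placeholder; the paper's exact map and scaling parameter differ)."""
    return 1.0 - np.exp(-z)

def surrogate_grad(grad_f, z):
    """One gradient query per round, pushed through the reparametrization.
    With x = 1 - exp(-z), dx/dz = exp(-z) elementwise, so by the chain rule
    the z-space gradient is grad_f(phi(z)) * exp(-z). This stands in for
    the paper's surrogate potential, which is not reproduced here."""
    return grad_f(phi(z)) * np.exp(-z)

def online_reduction(grad_oracles, n, eta=0.1):
    """Reduction to online linear optimization: each round's single
    surrogate-gradient query is handed to an OLO algorithm (here, online
    projected gradient ascent over z in [0,1]^n, so x = phi(z) stays in
    the down-closed box [0, 1 - 1/e]^n)."""
    z = np.zeros(n)
    plays = []
    for grad_f in grad_oracles:
        plays.append(phi(z))                # point played this round
        g = surrogate_grad(grad_f, z)       # single gradient query
        z = np.clip(z + eta * g, 0.0, 1.0)  # OLO update + projection
    return plays

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, T = 5, 200
    # Toy rounds: f_t(x) = <a_t, x> - 0.5 * ||x||^2 is concave, hence
    # DR-submodular; each oracle returns its gradient a_t - x.
    oracles = [(lambda x, a=rng.uniform(0, 1, n): a - x) for _ in range(T)]
    xs = online_reduction(oracles, n)
    print("final play:", np.round(xs[-1], 3))
```

The point of the sketch is the interface, not the rates: the learner never projects onto the feasible set directly, and each round costs exactly one gradient evaluation, matching the single-query claim in the takeaways.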

Computer Science > Machine Learning
arXiv:2602.20578 (cs) [Submitted on 24 Feb 2026]

Title: Upper-Linearizability of Online Non-Monotone DR-Submodular Maximization over Down-Closed Convex Sets

Authors: Yiyang Lu, Haresh Jadav, Mohammad Pedramfar, Ranveer Singh, Vaneet Aggarwal

Abstract: We study online maximization of non-monotone Diminishing-Return (DR)-submodular functions over down-closed convex sets, a regime where existing projection-free online methods suffer from suboptimal regret and limited feedback guarantees. Our main contribution is a new structural result showing that this class is $1/e$-linearizable under carefully designed exponential reparametrization, scaling parameter, and surrogate potential, enabling a reduction to online linear optimization. As a result, we obtain $O(T^{1/2})$ static regret with a single gradient query per round and unlock adaptive and dynamic regret guarantees, together with improved rates under semi-bandit, bandit, and zeroth-order feedback. Across all feedback models, our bounds strictly improve the state of the art.

Subjects: Machine Learning (cs.LG); Optimization and Control (math.OC); Machine Learning (stat.ML)
Cite as: arXiv:2602.20578 [cs.LG], https://doi.org/10.48550/arXiv.2602.20578
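For concreteness, the $O(T^{1/2})$ static regret is presumably the standard $\alpha$-regret with $\alpha = 1/e$, matching the $1/e$-linearizability result. The definition below is the textbook one, not quoted from the paper:

```latex
% Static alpha-regret against the best fixed point in K, with alpha = 1/e:
\[
  \mathcal{R}_T^{(\alpha)}
    \;=\; \alpha \,\max_{x \in \mathcal{K}} \sum_{t=1}^{T} f_t(x)
          \;-\; \sum_{t=1}^{T} f_t(x_t),
  \qquad
  \mathcal{R}_T^{(1/e)} = O\!\left(\sqrt{T}\right).
\]
```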
