[2602.10727] Rising Multi-Armed Bandits with Known Horizons


arXiv - Machine Learning · 3 min read

Summary

The paper presents a novel approach to the Rising Multi-Armed Bandit (RMAB) problem, introducing CUmulative Reward Estimation UCB (CURE-UCB) to optimize decision-making based on known horizons, demonstrating its superiority in structured environments.

Why It Matters

This research addresses a critical gap in the RMAB framework by emphasizing horizon-dependent optimality, which is crucial for applications like hyperparameter tuning in machine learning. Understanding how to leverage known horizons can significantly enhance performance in various practical scenarios.

Key Takeaways

  • CURE-UCB integrates horizon knowledge to improve decision-making.
  • The optimal strategy in RMAB shifts based on the available budget.
  • The proposed method outperforms traditional horizon-agnostic strategies.
  • Rigorous analysis establishes new regret upper bounds for CURE-UCB.
  • Extensive experiments validate the method's effectiveness in structured environments.
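The second takeaway, that the optimal strategy shifts with the budget, can be illustrated numerically. Below is a toy example with two hypothetical deterministic reward curves (not from the paper): arm A rises fast but plateaus low, arm B rises slowly to a higher plateau. Committing the whole budget T to one arm, the better arm flips as T grows.

```python
# Hypothetical rising reward curves, chosen only to illustrate
# horizon-dependent optimality; not the paper's experimental setup.
def reward_A(n):
    """Reward on the n-th play of arm A: fast rise, low plateau."""
    return min(0.6, 0.2 * n)

def reward_B(n):
    """Reward on the n-th play of arm B: slow rise, high plateau."""
    return min(0.9, 0.05 * n)

def cumulative(reward, T):
    """Total reward from spending the entire budget T on one arm."""
    return sum(reward(n) for n in range(1, T + 1))

def best_arm(T):
    """Which single-arm commitment wins for a given horizon T."""
    return "A" if cumulative(reward_A, T) > cumulative(reward_B, T) else "B"
```

With these curves, `best_arm(5)` is "A" (2.4 vs. 0.75 cumulative reward), while `best_arm(100)` is "B" (59.4 vs. 82.35): knowing T changes which arm is optimal, which is exactly why a horizon-aware policy can help.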

Computer Science > Machine Learning
arXiv:2602.10727 (cs) [Submitted on 11 Feb 2026 (v1), last revised 13 Feb 2026 (this version, v2)]
Title: Rising Multi-Armed Bandits with Known Horizons
Authors: Seockbean Song, Chenyu Gan, Youngsik Yoon, Siwei Wang, Wei Chen, Jungseul Ok

Abstract: The Rising Multi-Armed Bandit (RMAB) framework models environments where the expected rewards of arms increase with plays, capturing practical scenarios where the performance of each option improves with repeated use, such as in robotics and hyperparameter tuning. For instance, in hyperparameter tuning, the validation accuracy of a model configuration (arm) typically increases with each training epoch. A defining characteristic of RMAB is horizon-dependent optimality: unlike standard settings, the optimal strategy here shifts dramatically depending on the available budget $T$. This implies that knowledge of $T$ yields significantly greater utility in RMAB, empowering the learner to align its decision-making with this shifting optimality. However, the horizon-aware setting remains underexplored. To address this, we propose a novel CUmulative Reward Estimation UCB (CURE-UCB) that explicitly integrates the horizon. We provide a rigorous analysis establishing a new regret upper bound and prove that our method strictly outperforms horizon-agnostic strate...

