[2412.00798] Combinatorial Rising Bandits
Computer Science > Machine Learning
arXiv:2412.00798 (cs)
[Submitted on 1 Dec 2024 (v1), last revised 3 Mar 2026 (this version, v4)]

Title: Combinatorial Rising Bandits
Authors: Seockbean Song, Youngsik Yoon, Siwei Wang, Wei Chen, Jungseul Ok

Abstract: Combinatorial online learning is a fundamental task for selecting the optimal action (or super arm) as a combination of base arms in sequential interactions with systems providing stochastic rewards. It is applicable to diverse domains such as robotics, social advertising, network routing, and recommendation systems. In many real-world scenarios, we often encounter rising rewards, where playing a base arm not only provides an instantaneous reward but also contributes to the enhancement of future rewards, e.g., robots improving through practice and social influence strengthening with a history of successful recommendations. Crucially, these enhancements may propagate to multiple super arms that share the same base arms, introducing dependencies beyond the scope of existing bandit models. To address this gap, we introduce the Combinatorial Rising Bandit (CRB) framework and propose a provably efficient and empirically effective algorithm, Combinatorial Rising Upper Confidence Bound (CRUCB). We empirically demonstrate the effectiveness of CRUCB in realistic deep reinforcement learning environments and synthetic settings, whil...
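The setting described in the abstract can be illustrated with a minimal UCB-style sketch. This is not the paper's CRUCB algorithm; the concave reward curve `rising_reward`, the top-k super-arm structure, and all parameters below are illustrative assumptions chosen only to show base arms whose mean rewards rise with past plays.

```python
import math
import random

def rising_reward(n_plays, cap=1.0, rate=0.3):
    # Hypothetical rising reward curve: the mean reward of a base arm
    # grows concavely with the number of times it has been played,
    # saturating at `cap`.
    return cap * (1.0 - math.exp(-rate * n_plays))

def select_super_arm(counts, means, t, k):
    # Standard UCB index per base arm; the super arm is simply the
    # top-k base arms by index (the simplest combinatorial structure).
    indices = []
    for i, (n, mu) in enumerate(zip(counts, means)):
        if n == 0:
            indices.append((float("inf"), i))  # force initial exploration
        else:
            bonus = math.sqrt(2.0 * math.log(t) / n)
            indices.append((mu + bonus, i))
    indices.sort(reverse=True)
    return [i for _, i in indices[:k]]

def run(n_arms=5, k=2, horizon=200, seed=0):
    rng = random.Random(seed)
    counts = [0] * n_arms   # plays per base arm
    means = [0.0] * n_arms  # empirical mean reward per base arm
    for t in range(1, horizon + 1):
        super_arm = select_super_arm(counts, means, t, k)
        for i in super_arm:
            # Each play shifts the arm's reward curve upward, so the
            # rise an arm earns propagates to every super arm containing it.
            r = rising_reward(counts[i]) + rng.gauss(0.0, 0.05)
            counts[i] += 1
            means[i] += (r - means[i]) / counts[i]
    return counts

print(run())
```

Because the rising mean of a base arm is shared by all super arms that contain it, the exploration/exploitation trade-off here differs from the stationary case; CRUCB is designed for exactly this dependency, whereas the sketch above uses a plain stationary UCB index for simplicity.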