[2602.15972] Fast Online Learning with Gaussian Prior-Driven Hierarchical Unimodal Thompson Sampling
Summary
This paper presents a novel approach to Multi-Armed Bandit problems using Gaussian prior-driven hierarchical unimodal Thompson Sampling, demonstrating improved performance in clustered environments.
Why It Matters
The research addresses the efficiency of sequential decision-making under uncertainty, particularly in applications like mmWave communications and portfolio management. By exploiting cluster structure to tighten the regret bounds of Thompson Sampling, it advances online learning strategies that are crucial for real-time, data-driven decision-making.
Key Takeaways
- Introduces a new algorithm for Multi-Armed Bandit problems with Gaussian rewards.
- Demonstrates lower regret bounds using a hierarchical structure.
- Validates theoretical findings with numerical experiments.
- Applicable to real-world scenarios like communications and finance.
- Enhances existing Thompson Sampling methods for better performance.
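As background for the takeaways above, the baseline the paper builds on (Thompson Sampling with a Gaussian prior, TSG) can be sketched as follows. This is a minimal illustration under standard assumptions (known reward variance, an effectively flat prior realized by giving unpulled arms a very wide posterior); the function name and parameters are illustrative, not the paper's implementation.

```python
import numpy as np

def gaussian_ts(true_means, horizon, sigma=1.0, seed=0):
    """Minimal Thompson Sampling sketch for Gaussian-reward bandits.

    Each arm's mean gets a Gaussian posterior N(empirical mean, sigma^2 / n);
    every round we sample one value per arm and pull the argmax.
    """
    rng = np.random.default_rng(seed)
    k = len(true_means)
    counts = np.zeros(k)   # number of pulls per arm
    sums = np.zeros(k)     # cumulative observed reward per arm
    regret = 0.0
    best = max(true_means)
    for _ in range(horizon):
        # Posterior mean is the empirical mean (0 for unpulled arms);
        # unpulled arms get a huge posterior std, forcing exploration.
        means = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
        stds = sigma / np.sqrt(np.maximum(counts, 1e-9))
        arm = int(np.argmax(rng.normal(means, stds)))
        reward = rng.normal(true_means[arm], sigma)
        counts[arm] += 1
        sums[arm] += reward
        regret += best - true_means[arm]
    return regret, counts
```

With enough rounds, the pull counts concentrate on the best arm, which is the behavior the paper's clustered variants are designed to accelerate.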
Computer Science > Machine Learning
arXiv:2602.15972 (cs) [Submitted on 17 Feb 2026]
Title: Fast Online Learning with Gaussian Prior-Driven Hierarchical Unimodal Thompson Sampling
Authors: Tianchi Zhao, He Liu, Hongyin Shi, Jinliang Li
Abstract: We study a class of Multi-Armed Bandit (MAB) problems in which arms with Gaussian reward feedback are clustered. Such an arm setting arises in many real-world problems, for example, mmWave communications and portfolio management with risky assets, owing to the universality of the Gaussian distribution. Building on the Thompson Sampling with Gaussian prior (TSG) algorithm for selecting the optimal arm, we propose Thompson Sampling with Clustered arms under Gaussian prior (TSCG), tailored to the 2-level hierarchical structure. We prove that by exploiting the 2-level structure, TSCG achieves a lower regret bound than ordinary TSG. In addition, when the reward is unimodal, our Unimodal Thompson Sampling algorithm with Clustered Arms under Gaussian prior (UTSCG) attains an even lower regret bound. Each proposed algorithm is accompanied by a theoretical upper bound on its regret, and our numerical experiments confirm the advantages of the proposed algorithms.
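The 2-level idea described in the abstract (sample a cluster first, then an arm within it) can be sketched as one selection round below. The pooled cluster statistics and the aggregation rule here are illustrative assumptions for exposition, not the paper's exact TSCG update.

```python
import numpy as np

def hierarchical_ts_step(sums, counts, clusters, sigma=1.0, rng=None):
    """One round of a two-level Thompson Sampling sketch.

    clusters: list of integer index arrays partitioning the arms.
    Level 1 samples a posterior per cluster from its pooled pulls
    (an illustrative aggregation rule); level 2 runs ordinary
    Gaussian Thompson Sampling among the chosen cluster's arms.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Level 1: one Gaussian posterior sample per cluster.
    cluster_samples = []
    for arms in clusters:
        n = counts[arms].sum()
        mean = sums[arms].sum() / max(n, 1)
        cluster_samples.append(rng.normal(mean, sigma / np.sqrt(max(n, 1e-9))))
    chosen = clusters[int(np.argmax(cluster_samples))]
    # Level 2: per-arm Gaussian posterior samples within the chosen cluster.
    a_means = np.where(counts[chosen] > 0,
                       sums[chosen] / np.maximum(counts[chosen], 1), 0.0)
    a_stds = sigma / np.sqrt(np.maximum(counts[chosen], 1e-9))
    return int(chosen[np.argmax(rng.normal(a_means, a_stds))])
```

The intuition for the lower regret bound is that a poorly performing cluster is ruled out as a whole at level 1, so its individual arms need not each be explored to the same depth as in flat TSG.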