[2602.12901] Blessings of Multiple Good Arms in Multi-Objective Linear Bandits
Summary
This paper studies the multi-objective linear bandit problem and shows that the presence of multiple good arms induces implicit exploration, allowing simple, mostly greedy algorithms to perform well without any distributional assumptions on the contexts.
Why It Matters
Multi-objective linear bandits model sequential decision-making when several criteria must be balanced at once, a setting that arises across machine learning and AI applications. This study challenges the traditional view that the multi-objective setting is strictly harder than the single-objective one, and introduces a framework for analyzing Pareto fairness of bandit algorithms, which can inform the design of more efficient and equitable AI systems.
Key Takeaways
- Multiple good arms can induce implicit exploration in multi-objective settings.
- Simple greedy algorithms can achieve strong performance in these scenarios.
- The study introduces a framework for analyzing Pareto fairness in bandit algorithms.
- No distributional assumptions are required for the proposed methods.
- This research contributes to the understanding of complex decision-making in AI.
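To make the "greedy" takeaway concrete, here is a minimal sketch of a mostly greedy multi-objective linear bandit loop. This is an illustrative toy, not the paper's algorithm: the environment (linear rewards per objective), the shared ridge-regression design matrix, and the per-round random scalarization weights used to pick among estimated-good arms are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, M, T = 5, 20, 2, 2000          # feature dim, arms, objectives, rounds

theta = rng.normal(size=(M, d))      # hidden parameter vector per objective
arms = rng.normal(size=(K, d))
arms /= np.linalg.norm(arms, axis=1, keepdims=True)

A = np.eye(d)                        # regularized design (Gram) matrix, shared
b = np.zeros((M, d))                 # per-objective response accumulators

for t in range(T):
    theta_hat = np.linalg.solve(A, b.T).T    # (M, d) ridge estimates
    scores = arms @ theta_hat.T              # (K, M) estimated rewards
    w = rng.dirichlet(np.ones(M))            # random scalarization (assumption)
    k = int(np.argmax(scores @ w))           # greedy arm under this weighting
    x = arms[k]
    y = theta @ x + rng.normal(scale=0.1, size=M)  # noisy M-dim reward
    A += np.outer(x, x)                      # rank-one design update
    b += y[:, None] * x[None, :]
```

The point of the sketch is that no explicit exploration bonus appears anywhere: if several arms look good for different objectives, the greedy choices themselves spread the pulls across informative directions, which is the "implicit exploration" effect the paper analyzes.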
Statistics > Machine Learning
arXiv:2602.12901 (stat) [Submitted on 13 Feb 2026]
Title: Blessings of Multiple Good Arms in Multi-Objective Linear Bandits
Authors: Heesang Ann, Min-hwan Oh
Abstract: The multi-objective bandit setting has traditionally been regarded as more complex than the single-objective case, as multiple objectives must be optimized simultaneously. In contrast to this prevailing view, we demonstrate that when multiple good arms exist for multiple objectives, they can induce a surprising benefit: implicit exploration. Under this condition, we show that simple algorithms that greedily select actions in most rounds can nonetheless achieve strong performance, both theoretically and empirically. To our knowledge, this is the first study to introduce implicit exploration in both multi-objective and parametric bandit settings without any distributional assumptions on the contexts. We further introduce a framework for effective Pareto fairness, which provides a principled approach to rigorously analyzing the fairness of multi-objective bandit algorithms.
Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG)
Cite as: arXiv:2602.12901 [stat.ML]
DOI: https://doi.org/10.48550/arXiv.2602.12901