[2511.15315] Robust Bayesian Optimisation with Unbounded Corruptions
Summary
The paper introduces RCGP-UCB, a robust Bayesian optimization algorithm designed to handle extreme outliers by allowing unbounded corruption magnitudes, achieving sublinear regret in challenging conditions.
Why It Matters
This research addresses a key limitation of Bayesian optimization: existing provably robust methods assume a bounded cumulative corruption budget, so even a single sufficiently large outlier can defeat them. By proposing an algorithm whose guarantees hold when only the frequency of corruptions is bounded, not their magnitude, the work improves the reliability of Bayesian optimization in real-world settings where observations can be grossly corrupted.
Key Takeaways
- RCGP-UCB algorithm combines upper confidence bounds with robust Gaussian processes.
- Its stable and adaptive variants achieve sublinear regret under up to O(T^{1/4}) and O(T^{1/7}) corruptions, respectively, of possibly infinite magnitude.
- Without outliers, RCGP-UCB's regret bounds match those of standard GP-UCB, so robustness comes at near-zero cost.
- This advancement can significantly improve the robustness of machine learning applications.
- The research provides a theoretical foundation for future work in robust optimization.
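To make the combination in the takeaways concrete, the sketch below pairs a UCB acquisition rule with a Gaussian process whose observations are down-weighted by a bounded influence function, so a corruption of arbitrary magnitude has limited effect on the posterior. This is an illustrative approximation only, not the paper's RCGP-UCB: the kernel, the inverse-multiquadric weight function `imq_weights`, the median reference, and all parameter values (`lengthscale`, `noise`, `beta`, `c`) are assumptions chosen for the example.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    """Squared-exponential kernel between 1-D point sets A and B."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

def imq_weights(y, ref, c=1.0):
    """Inverse-multiquadric weights (illustrative): observations far
    from a reference value get weight -> 0, so even an unbounded
    corruption has bounded influence on the fit."""
    return (1.0 + ((y - ref) / c) ** 2) ** -0.5

def weighted_gp_posterior(X, y, Xs, w, noise=0.1):
    """GP posterior mean/std where a small weight w_i inflates the
    effective noise on observation i, down-weighting outliers."""
    K = rbf_kernel(X, X)
    Kn = K + np.diag(noise**2 / np.maximum(w, 1e-8))
    Ks = rbf_kernel(Xs, X)
    L = np.linalg.cholesky(Kn)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = 1.0 - np.sum(v**2, axis=0)
    return mu, np.sqrt(np.maximum(var, 0.0))

def robust_ucb_step(X, y, candidates, beta=4.0):
    """One robust-UCB step: weight the observations, fit the weighted
    GP, and pick the candidate maximising mu + sqrt(beta) * sigma."""
    w = imq_weights(y, np.median(y))
    mu, sd = weighted_gp_posterior(X, y, candidates, w)
    return candidates[np.argmax(mu + np.sqrt(beta) * sd)]
```

For example, if one observation is replaced by an extreme value (say 50 against a signal in [-1, 1]), its weight collapses toward zero and the next query point is still chosen from a sensible posterior, whereas a standard GP-UCB fit would be dragged toward the corrupted point.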
Statistics > Machine Learning — arXiv:2511.15315 (stat)
[Submitted on 19 Nov 2025 (v1), last revised 16 Feb 2026 (this version, v2)]
Title: Robust Bayesian Optimisation with Unbounded Corruptions
Authors: Abdelhamid Ezzerg, Ilija Bogunovic, Jeremias Knoblauch
Abstract: Bayesian Optimization is critically vulnerable to extreme outliers. Existing provably robust methods typically assume a bounded cumulative corruption budget, which makes them defenseless against even a single corruption of sufficient magnitude. To address this, we introduce a new adversary whose budget is only bounded in the frequency of corruptions, not in their magnitude. We then derive RCGP-UCB, an algorithm coupling the famous upper confidence bound (UCB) approach with a Robust Conjugate Gaussian Process (RCGP). We present stable and adaptive versions of RCGP-UCB, and prove that they achieve sublinear regret in the presence of up to $O(T^{1/4})$ and $O(T^{1/7})$ corruptions with possibly infinite magnitude. This robustness comes at near zero cost: without outliers, RCGP-UCB's regret bounds match those of the standard GP-UCB algorithm.
Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG)
Cite as: arXiv:2511.15315 [stat.ML] (or arXiv:2511.15315v2 [stat.ML] for this version)
DOI: https://doi.org/10.48550/arXiv.2511.15315