[2511.15315] Robust Bayesian Optimisation with Unbounded Corruptions

arXiv - Machine Learning

Summary

The paper introduces RCGP-UCB, a robust Bayesian optimization algorithm designed to handle extreme outliers by allowing unbounded corruption magnitudes, achieving sublinear regret in challenging conditions.

Why It Matters

This research addresses a key limitation of existing provably robust Bayesian optimization methods: they assume a bounded cumulative corruption budget, so even a single sufficiently large outlier can defeat them. By proposing an algorithm that maintains sublinear regret under corruptions of unbounded magnitude, the paper makes Bayesian optimization more dependable in real-world applications where measurements can be arbitrarily corrupted.

Key Takeaways

  • RCGP-UCB algorithm combines upper confidence bounds with robust Gaussian processes.
  • Its stable and adaptive versions tolerate up to O(T^{1/4}) and O(T^{1/7}) corruptions, respectively, of possibly infinite magnitude while retaining sublinear regret.
  • Without outliers, its regret bounds match those of the standard GP-UCB algorithm.
  • This advancement can significantly improve the robustness of machine learning applications.
  • The research provides a theoretical foundation for future work in robust optimization.

Statistics > Machine Learning, arXiv:2511.15315 (stat)
[Submitted on 19 Nov 2025 (v1), last revised 16 Feb 2026 (this version, v2)]

Title: Robust Bayesian Optimisation with Unbounded Corruptions
Authors: Abdelhamid Ezzerg, Ilija Bogunovic, Jeremias Knoblauch

Abstract: Bayesian Optimization is critically vulnerable to extreme outliers. Existing provably robust methods typically assume a bounded cumulative corruption budget, which makes them defenseless against even a single corruption of sufficient magnitude. To address this, we introduce a new adversary whose budget is only bounded in the frequency of corruptions, not in their magnitude. We then derive RCGP-UCB, an algorithm coupling the famous upper confidence bound (UCB) approach with a Robust Conjugate Gaussian Process (RCGP). We present stable and adaptive versions of RCGP-UCB, and prove that they achieve sublinear regret in the presence of up to $O(T^{1/4})$ and $O(T^{1/7})$ corruptions with possibly infinite magnitude. This robustness comes at near zero cost: without outliers, RCGP-UCB's regret bounds match those of the standard GP-UCB algorithm.

Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG)
Cite as: arXiv:2511.15315 [stat.ML] (or arXiv:2511.15315v2 [stat.ML] for this version)
DOI: https://doi.org/10.48550/arXiv.2511.15315
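To make the UCB side of the abstract concrete, here is a minimal GP-UCB loop on a 1D toy problem. This is a generic illustration, not the paper's RCGP-UCB: the Robust Conjugate Gaussian Process posterior is approximated by a hypothetical residual-based down-weighting of observations (the weighting scheme, kernel, and all parameters below are illustrative assumptions, not taken from the paper).

```python
# Generic GP-UCB sketch with a heuristic robust re-weighting of observations.
# NOT the paper's RCGP-UCB; the down-weighting is a hypothetical stand-in for
# the robust conjugate GP posterior described in the abstract.
import numpy as np

def rbf_kernel(a, b, lengthscale=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2, weights=None):
    """GP posterior mean/std; low-weight (suspected outlier) points get
    inflated per-point noise, which softens their influence."""
    if weights is None:
        weights = np.ones_like(y_train)
    K = rbf_kernel(x_train, x_train) + np.diag(noise / np.maximum(weights, 1e-6))
    K_s = rbf_kernel(x_test, x_train)
    mu = K_s @ np.linalg.solve(K, y_train)
    v = np.linalg.solve(K, K_s.T)
    var = np.clip(np.diag(rbf_kernel(x_test, x_test) - K_s @ v), 0.0, None)
    return mu, np.sqrt(var)

def ucb_step(x_train, y_train, candidates, beta=2.0):
    """Pick the candidate maximizing mean + beta * std (the UCB rule)."""
    # Heuristic robust weights: down-weight points far from a smooth fit.
    mu0, _ = gp_posterior(x_train, y_train, x_train)
    weights = 1.0 / (1.0 + (y_train - mu0) ** 2)  # illustrative, not RCGP's
    mu, sigma = gp_posterior(x_train, y_train, candidates, weights=weights)
    return candidates[np.argmax(mu + beta * sigma)]

if __name__ == "__main__":
    f = lambda x: -(x - 0.6) ** 2          # toy objective, maximum at x = 0.6
    grid = np.linspace(0.0, 1.0, 201)      # candidate set
    xs = np.array([0.1, 0.9])
    ys = f(xs)
    for _ in range(15):                    # sequential UCB acquisitions
        x_next = ucb_step(xs, ys, grid)
        xs = np.append(xs, x_next)
        ys = np.append(ys, f(x_next))
    print("best observed x:", xs[np.argmax(ys)])
```

The key contrast with standard GP-UCB is only the `weights` argument: with all weights equal to one, this reduces to the usual GP posterior, mirroring the abstract's claim that robustness comes at near zero cost when no outliers are present.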
