[2602.20706] Online Algorithms with Unreliable Guidance
Summary
This paper presents a novel model for online decision-making called Online Algorithms with Unreliable Guidance (OAG), which separates the predictive and algorithmic components to enhance competitiveness in uncertain environments.
Why It Matters
The OAG model addresses the challenges of decision-making under uncertainty, particularly in machine learning contexts. By providing a systematic method to transform existing algorithms, it enhances their robustness and consistency, which is crucial for applications in AI and data science.
Key Takeaways
- Introduces the OAG model for online decision-making.
- Separates predictive and algorithmic components for better analysis.
- Presents the drop or trust blindly (DTB) compiler to enhance existing algorithms.
- Demonstrates optimal performance for classic online problems.
- Establishes a framework for understanding algorithmic robustness.
Computer Science > Artificial Intelligence
arXiv:2602.20706 (cs) [Submitted on 24 Feb 2026]
Title: Online Algorithms with Unreliable Guidance
Authors: Julien Dallot, Yuval Emek, Yuval Gil, Maciej Pacut, Stefan Schmid
Abstract
This paper introduces a new model for ML-augmented online decision making, called online algorithms with unreliable guidance (OAG). This model completely separates the predictive and algorithmic components, thus offering a single well-defined analysis framework that relies solely on the considered problem. Formulated through the lens of request-answer games, an OAG algorithm receives, with each incoming request, a piece of guidance taken from the problem's answer space; ideally, this guidance is the optimal answer for the current request, however with probability $\beta$, the guidance is adversarially corrupted. The goal is to develop OAG algorithms that admit good competitiveness when $\beta = 0$ (a.k.a. consistency) as well as when $\beta = 1$ (a.k.a. robustness); the appealing notion of smoothness, which in most prior work required a dedicated loss function, now arises naturally as $\beta$ shifts from $0$ to $1$. We then describe a systematic method, called the drop or trust blindly (DTB) compiler, which transforms any online algorithm into a learning-augmented online algorithm in the OAG model. ...
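The per-request guidance channel described in the abstract can be made concrete with a toy simulation on ski rental. Everything below is an illustrative sketch of the $\beta$-corruption idea under my own assumptions: the ski-rental instance, the `trust_blindly` and `trust_until_breakeven` policies, and the corruption rule are hypothetical, not the paper's DTB compiler or its analysis.

```python
import random


def ski_rental_cost(policy, n_days, buy_cost, beta, rng):
    """Simulate ski rental under an OAG-style guidance channel:
    each day the guidance is the optimal answer ('buy' iff the season
    lasts at least buy_cost days), but with probability beta it is
    adversarially flipped. Returns the total cost paid by the policy."""
    cost = 0
    for day in range(1, n_days + 1):
        optimal = "buy" if n_days >= buy_cost else "rent"
        corrupted = "rent" if optimal == "buy" else "buy"  # worst-case flip
        guidance = corrupted if rng.random() < beta else optimal
        if policy(day, guidance, buy_cost) == "buy":
            return cost + buy_cost
        cost += 1  # rent for one more day
    return cost


def trust_blindly(day, guidance, buy_cost):
    # Hypothetical policy: follow the guidance verbatim.
    # Matches OPT when beta = 0, but has unbounded cost when beta = 1.
    return guidance


def trust_until_breakeven(day, guidance, buy_cost):
    # Hypothetical policy: follow the guidance, but fall back to the
    # classic break-even rule by forcing a buy on day buy_cost.
    return "buy" if day >= buy_cost else guidance


def opt_cost(n_days, buy_cost):
    # Offline optimum: either rent every day or buy immediately.
    return min(n_days, buy_cost)
```

For example, with `n_days = 100`, `buy_cost = 10`, and `beta = 1.0`, `trust_blindly` is tricked into renting all season (cost 100 against an optimum of 10), while `trust_until_breakeven` pays at most `2 * buy_cost - 1` regardless of the guidance, a toy version of the consistency/robustness trade-off the paper formalizes.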