[2605.06187] In-Context Black-Box Optimization with Unreliable Feedback


Computer Science > Machine Learning — arXiv:2605.06187 (cs)
[Submitted on 7 May 2026]

Title: In-Context Black-Box Optimization with Unreliable Feedback
Authors: Nicolas Samuel Blumer, Julien Martinelli, Samuel Kaski

Abstract: Black-box optimization in science and engineering often comes with side information: experts, simulators, pretrained predictors, or heuristics can suggest which candidates look promising. This information can accelerate search, but it can also be biased, input-dependent, or misleading. Feedback-aware BO methods typically handle one task at a time, limiting their ability to generalize over multiple sources of feedback. In-context optimizers address cross-task adaptation, but usually assume that optimization history is the only available signal at test time. We study feedback-informed in-context black-box optimization (FICBO), where a pretrained optimizer conditions on both the observed history and cheap auxiliary feedback for the current candidate set. We introduce a structured feedback prior that models how feedback sources vary in their access, relevance, and distortion relative to the true objective, and use it to pretrain a feedback-aware transformer. At test time, the model estimates source reliability in context by comparing observed objective values with auxiliary signals, improving query...
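The abstract does not give the paper's actual mechanism in detail, but the core idea it names — estimating each feedback source's reliability in context by comparing its signals against objective values observed so far, then weighting sources accordingly — can be illustrated with a minimal sketch. Everything below (the correlation-based weighting, the function names, the standardization step) is an illustrative assumption, not the paper's method:

```python
import numpy as np

def estimate_source_reliability(observed_y, feedback_on_observed):
    """Hypothetical in-context reliability estimate: correlate each
    auxiliary source's scores with the true objective values seen so far.
    Returns one weight in [0, 1] per source."""
    weights = []
    for scores in feedback_on_observed:
        scores = np.asarray(scores, dtype=float)
        if len(observed_y) < 2 or np.std(scores) == 0 or np.std(observed_y) == 0:
            weights.append(0.5)  # uninformative prior before enough evidence
            continue
        r = np.corrcoef(observed_y, scores)[0, 1]
        weights.append(max(0.0, float(r)))  # distrust anti-correlated sources
    return np.array(weights)

def score_candidates(candidate_feedback, weights):
    """Reliability-weighted average of standardized per-source feedback
    over the current candidate set (shape: sources x candidates)."""
    fb = np.asarray(candidate_feedback, dtype=float)
    fb = (fb - fb.mean(axis=1, keepdims=True)) / (fb.std(axis=1, keepdims=True) + 1e-9)
    if weights.sum() == 0:
        return np.zeros(fb.shape[1])
    return weights @ fb / weights.sum()
```

In this toy version a source whose past suggestions tracked the objective gets its candidate scores trusted, while a misleading (anti-correlated) source is down-weighted to zero; the paper instead pretrains a transformer under a structured feedback prior so that this reliability inference happens implicitly in context.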

Originally published on May 08, 2026. Curated by AI News.

Related Articles

- Implementing advanced AI technologies in finance | MIT Technology Review (MIT Technology Review, 4 min)
- [2602.07026] Modality Gap-Driven Subspace Alignment Training Paradigm For Multimodal Large Language Models (arXiv - AI, 4 min)
- [2511.22893] Switching-time bioprocess control with pulse-width-modulated optogenetics (arXiv - AI, 4 min)
- [2407.04183] Seeing Like an AI: How LLMs Apply (and Misapply) Wikipedia Neutrality Norms (arXiv - AI, 4 min)

