[2602.17743] Provable Adversarial Robustness in In-Context Learning

arXiv - Machine Learning · 3 min read

Summary

This paper presents a framework for ensuring adversarial robustness in in-context learning (ICL) for large language models, addressing the limitations of current theoretical models under distributional shifts.

Why It Matters

As large language models become integral in various applications, ensuring their reliability against adversarial attacks is crucial. This research provides foundational insights into improving model robustness, which can enhance the safety and effectiveness of AI systems in real-world scenarios.

Key Takeaways

  • Introduces a distributionally robust meta-learning framework for ICL.
  • Establishes a link between model capacity and adversarial robustness.
  • Demonstrates that robustness scales with model capacity and highlights sample complexity penalties in adversarial settings.

Computer Science > Machine Learning
arXiv:2602.17743 (cs) [Submitted on 19 Feb 2026]
Title: Provable Adversarial Robustness in In-Context Learning
Authors: Di Zhang

Abstract: Large language models adapt to new tasks through in-context learning (ICL) without parameter updates. Current theoretical explanations for this capability assume test tasks are drawn from a distribution similar to that seen during pretraining. This assumption overlooks adversarial distribution shifts that threaten real-world reliability. To address this gap, we introduce a distributionally robust meta-learning framework that provides worst-case performance guarantees for ICL under Wasserstein-based distribution shifts. Focusing on linear self-attention Transformers, we derive a non-asymptotic bound linking adversarial perturbation strength ($\rho$), model capacity ($m$), and the number of in-context examples ($N$). The analysis reveals that model robustness scales with the square root of its capacity ($\rho_{\text{max}} \propto \sqrt{m}$), while adversarial settings impose a sample complexity penalty proportional to the square of the perturbation magnitude ($N_\rho - N_0 \propto \rho^2$). Experiments on synthetic tasks confirm these scaling laws. These findings advance the theoretical understanding of ICL's limits under adversarial conditions and suggest that model capacity serv...
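The two scaling laws from the abstract can be sketched numerically. The sketch below is a minimal illustration only: the paper states proportionalities, so the constants C1 and C2 here are hypothetical placeholders, not values from the paper.

```python
import math

# Hypothetical proportionality constants (the paper only gives
# rho_max ∝ sqrt(m) and N_rho - N_0 ∝ rho^2, not the constants).
C1 = 0.5
C2 = 10.0


def max_tolerable_perturbation(m: int) -> float:
    """Robustness threshold rho_max = C1 * sqrt(m) for model capacity m."""
    return C1 * math.sqrt(m)


def in_context_samples_needed(n0: int, rho: float) -> float:
    """Samples needed under perturbation rho: N_rho = N_0 + C2 * rho^2."""
    return n0 + C2 * rho ** 2


if __name__ == "__main__":
    # Quadrupling capacity should double the tolerable perturbation.
    for m in (64, 256, 1024):
        print(f"m={m:5d} -> rho_max ≈ {max_tolerable_perturbation(m):.2f}")
    # Doubling the perturbation should quadruple the sample penalty.
    for rho in (0.0, 0.5, 1.0, 2.0):
        print(f"rho={rho:.1f} -> N ≈ {in_context_samples_needed(20, rho):.0f}")
```

Under these assumed constants, the printed values make the square-root and quadratic relationships easy to eyeball: going from m=256 to m=1024 doubles rho_max, and going from rho=1.0 to rho=2.0 quadruples the extra samples over N_0.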
