[2505.14042] Adversarially Pretrained Transformers May Be Universally Robust In-Context Learners


arXiv - Machine Learning


Computer Science > Machine Learning
arXiv:2505.14042 (cs)
Submitted on 20 May 2025 (v1); last revised 1 Mar 2026 (this version, v3).

Title: Adversarially Pretrained Transformers May Be Universally Robust In-Context Learners
Authors: Soichiro Kumano, Hiroshi Kera, Toshihiko Yamasaki

Abstract: Adversarial training is one of the most effective defenses against adversarial attacks, but it incurs a high computational cost. In this study, we present the first theoretical analysis suggesting that adversarially pretrained transformers can serve as universally robust foundation models -- models that adapt robustly to diverse downstream tasks with only lightweight tuning. Specifically, we demonstrate that single-layer linear transformers, after adversarial pretraining across a variety of classification tasks, generalize robustly to unseen classification tasks through in-context learning from clean demonstrations (i.e., without requiring additional adversarial training or adversarial examples). This universal robustness stems from the model's ability to adaptively focus on robust features within a given task. We also identify two open challenges to attaining robustness: the accuracy-robustness trade-off and sample-hungry training. This study initiates the discussion on the utility of universally robust foundation models. While t...
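As a rough illustration of the setting (a minimal sketch, not the paper's exact construction): with in-context demonstrations (x_i, y_i), a single-layer linear-attention predictor reduces to a kernel-style rule f(x) = (1/n) Σ_i y_i ⟨x_i, x⟩, equivalent to one gradient step on a linear model. The toy task, feature layout, and FGSM-style perturbation below are assumptions for illustration only.

```python
# Sketch: single-layer linear attention as an in-context classifier,
# probed with an FGSM-style perturbation of the query. Illustrative
# assumptions throughout; not the paper's actual model or experiments.
import numpy as np

rng = np.random.default_rng(0)

def icl_linear_attention(X_demo, y_demo, x_query):
    """Predict the query score from clean in-context demonstrations:
    f(x) = (1/n) * sum_i y_i <x_i, x>  (linear attention, linear values)."""
    scores = X_demo @ x_query              # attention scores <x_i, x>
    return float(scores @ y_demo) / len(y_demo)

# Toy binary task: labels +-1, class means +-mu along one coordinate,
# which plays the role of a "robust feature" for this task.
d, n = 8, 64
mu = np.zeros(d)
mu[0] = 2.0
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * mu + 0.5 * rng.standard_normal((n, d))

x_test = mu + 0.5 * rng.standard_normal(d)   # a clean positive query
pred_clean = np.sign(icl_linear_attention(X, y, x_test))

# FGSM-style attack on the query: since f is linear in x_query, its
# gradient is (1/n) * X^T y; step against it to lower the score.
eps = 0.3
grad = (X.T @ y) / n
x_adv = x_test - eps * np.sign(grad)
score_adv = icl_linear_attention(X, y, x_adv)
```

Because f is linear in the query, the perturbation lowers the score by exactly eps times the L1 norm of the gradient; whether the sign flips depends on the margin, which is the kind of accuracy-robustness tension the abstract alludes to.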

Originally published on March 03, 2026. Curated by AI News.


