[2505.14042] Adversarially Pretrained Transformers May Be Universally Robust In-Context Learners
Computer Science > Machine Learning
arXiv:2505.14042 (cs)
[Submitted on 20 May 2025 (v1), last revised 1 Mar 2026 (this version, v3)]

Title: Adversarially Pretrained Transformers May Be Universally Robust In-Context Learners
Authors: Soichiro Kumano, Hiroshi Kera, Toshihiko Yamasaki

Abstract: Adversarial training is one of the most effective defenses against adversarial attacks, but it incurs a high computational cost. In this study, we present the first theoretical analysis suggesting that adversarially pretrained transformers can serve as universally robust foundation models -- models that adapt robustly to diverse downstream tasks with only lightweight tuning. Specifically, we demonstrate that single-layer linear transformers, after adversarial pretraining across a variety of classification tasks, can generalize robustly to unseen classification tasks through in-context learning from clean demonstrations (i.e., without additional adversarial training or adversarial examples). This universal robustness stems from the model's ability to adaptively focus on robust features within a given task. We also identify two open challenges to attaining robustness: the accuracy-robustness trade-off and sample-hungry training. This study initiates the discussion on the utility of universally robust foundation models. While t...
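To make the setting concrete, the sketch below is a minimal, hypothetical illustration (not the paper's construction or its pretrained weights): a single-layer linear-attention predictor forms its output for a query as a label-weighted sum of attention scores over clean in-context demonstrations, and an FGSM-style perturbation of the query probes its robustness. The weight matrix `W`, the toy task, and the attack budget `eps` are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-layer linear-attention in-context learner:
# for demonstrations (x_i, y_i) and query x_q, predict
#   f(x_q) = sign( sum_i y_i * <W x_i, x_q> ),
# where W stands in for a pretrained attention weight matrix.

d, n = 8, 64
W = np.eye(d)  # identity as a placeholder for pretrained weights

# Toy unseen binary task: labels given by a random ground-truth direction.
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)
X = rng.normal(size=(n, d))      # clean demonstration inputs
y = np.sign(X @ w_star)          # demonstration labels

def icl_predict(W, X, y, x_q):
    """In-context prediction from clean demonstrations via linear attention."""
    scores = (X @ W) @ x_q       # attention scores <W x_i, x_q>
    return np.sign(y @ scores)   # label-weighted aggregation, then sign

def fgsm(W, X, y, x_q, y_q, eps):
    """FGSM attack on the query: step against the correct-class margin.

    The margin is y_q * f_pre(x_q) with f_pre linear in x_q, so its
    gradient w.r.t. x_q is y_q * (y @ (X @ W)).
    """
    grad = y @ (X @ W)
    return x_q - eps * y_q * np.sign(grad)

# Compare clean and adversarial accuracy on fresh queries from the task.
Q = rng.normal(size=(200, d))
yq = np.sign(Q @ w_star)
clean = np.mean([icl_predict(W, X, y, q) == t for q, t in zip(Q, yq)])
adv = np.mean([icl_predict(W, X, y, fgsm(W, X, y, q, t, 0.5)) == t
               for q, t in zip(Q, yq)])
print(f"clean acc: {clean:.2f}, FGSM acc: {adv:.2f}")
```

With the identity placeholder for `W`, the learner fits the clean demonstrations but degrades sharply under attack; the paper's claim is that adversarial pretraining shapes the analogue of `W` so that attention concentrates on robust features, narrowing this gap without any adversarial examples at inference time.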