[2603.24511] Claudini: Autoresearch Discovers State-of-the-Art Adversarial Attack Algorithms for LLMs
Computer Science > Machine Learning
arXiv:2603.24511 (cs) [Submitted on 25 Mar 2026]

Title: Claudini: Autoresearch Discovers State-of-the-Art Adversarial Attack Algorithms for LLMs
Authors: Alexander Panfilov, Peter Romov, Igor Shilov, Yves-Alexandre de Montjoye, Jonas Geiping, Maksym Andriushchenko

Abstract: LLM agents like Claude Code can not only write code but also be used for autonomous AI research and engineering [rank2026posttrainbench; novikov2025alphaevolve]. We show that an autoresearch-style pipeline [karpathy2026autoresearch] powered by Claude Code discovers novel white-box adversarial attack algorithms that significantly outperform all existing (30+) methods in jailbreaking and prompt injection evaluations. Starting from existing attack implementations, such as GCG [zou2023universal], the agent iterates to produce new algorithms achieving up to 40% attack success rate on CBRN queries against GPT-OSS-Safeguard-20B, compared to ≤10% for existing algorithms (teaser figure, left). The discovered algorithms generalize: attacks optimized on surrogate models transfer directly to held-out models, achieving 100% ASR against Meta-SecAlign-70B [chen2025secalign] versus 56% for the best baseline (teaser figure, m...
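For context on the starting point the agent iterates from, below is a toy sketch of a GCG-style greedy coordinate step in the spirit of Zou et al. [zou2023universal]: take the gradient of a loss with respect to one-hot token indicators, shortlist the top-k promising token swaps, evaluate random single-position substitutions, and keep the best. Everything here is illustrative, not the paper's method: the embedding table, the quadratic surrogate loss, and all sizes are synthetic stand-ins for a real model's forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a small vocab embedding table and a target vector
# playing the role of the "behavior" the attack optimizes toward.
VOCAB, DIM, SUFFIX_LEN = 50, 8, 6
E = rng.normal(size=(VOCAB, DIM))       # embedding matrix (vocab x dim)
target = rng.normal(size=DIM)

def loss(tokens):
    # Surrogate loss: squared distance of the mean suffix embedding to target.
    return float(np.sum((E[tokens].mean(axis=0) - target) ** 2))

def grad_onehot(tokens):
    # Gradient of the loss w.r.t. each position's one-hot token indicator.
    # Here it is identical across positions because the loss uses the mean.
    g_embed = 2.0 * (E[tokens].mean(axis=0) - target) / len(tokens)
    return E @ g_embed                  # shape (VOCAB,)

def gcg_step(tokens, k=5, n_candidates=16):
    # Shortlist the k token ids whose substitution gradient is most negative,
    # then greedily test random single-position swaps and keep the best.
    shortlist = np.argsort(grad_onehot(tokens))[:k]
    best, best_loss = tokens, loss(tokens)
    for _ in range(n_candidates):
        cand = tokens.copy()
        cand[rng.integers(len(tokens))] = rng.choice(shortlist)
        cand_loss = loss(cand)
        if cand_loss < best_loss:
            best, best_loss = cand, cand_loss
    return best, best_loss

suffix = rng.integers(0, VOCAB, size=SUFFIX_LEN)
init_loss = loss(suffix)
for _ in range(50):
    suffix, cur_loss = gcg_step(suffix)
```

Because each step only accepts improving swaps, the loss is non-increasing; in a real attack the quadratic surrogate is replaced by the target model's negative log-likelihood of an affirmative response.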