[2604.00324] The Persistent Vulnerability of Aligned AI Systems
Computer Science > Machine Learning
arXiv:2604.00324 (cs)
[Submitted on 31 Mar 2026]

Title: The Persistent Vulnerability of Aligned AI Systems
Authors: Aengus Lynch

Abstract: Autonomous AI agents are being deployed with filesystem access, email control, and multi-step planning. This thesis contributes to four open problems in AI safety: understanding dangerous internal computations, removing dangerous behaviors once embedded, testing for vulnerabilities before deployment, and predicting when models will act against their deployers. ACDC automates circuit discovery in transformers, recovering all five component types from prior manual work on GPT-2 Small by selecting 68 edges from 32,000 candidates in hours rather than months. Latent Adversarial Training (LAT) removes dangerous behaviors by optimizing perturbations in the residual stream to elicit failure modes, then training under those perturbations. LAT solved the sleeper agent problem where standard safety training failed, matching existing defenses with 700x fewer GPU hours. Best-of-N jailbreaking achieves 89% attack success on GPT-4o and 78% on Claude 3.5 Sonnet through random input augmentations. Attack success follows power-law scaling across text, vision, and audio, enabling quantitative forecasting of adversarial robustness. Agentic misalignment tests whether frontier models autonomously choose ha...
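The LAT recipe described above (find a worst-case perturbation of an internal activation, then train under it) can be sketched on a toy model. This is a minimal illustration only: it perturbs the scalar logit of a logistic-regression model with a one-step signed-gradient attack, whereas the thesis applies the idea to transformer residual streams. All data and hyperparameters here are invented for illustration.

```python
import numpy as np

# Toy LAT sketch: perturb a "latent" quantity (the scalar logit) adversarially,
# then take the training step under that perturbation. Illustrative only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
w_true = np.array([2.0, -1.0])
y = np.sign(X @ w_true)           # separable labels in {-1, +1}

w = np.zeros(2)
lr, eps = 0.1, 0.5
for _ in range(200):
    z = X @ w                      # latent logits
    # Worst-case one-step latent perturbation: the logistic loss
    # log(1 + exp(-y*(z + d))) grows fastest when d pushes the
    # logit against the label, so the sign-gradient step is d = -eps*y.
    delta = -eps * y
    margin = y * (z + delta)
    # Gradient of the mean perturbed loss w.r.t. w (delta held fixed).
    grad_w = -(y * (1.0 / (1.0 + np.exp(margin)))) @ X / len(y)
    w -= lr * grad_w               # train under the perturbed latent

acc = np.mean(np.sign(X @ w) == y)
print(f"clean accuracy after LAT-style training: {acc:.2f}")
```

The design point carried over from LAT is that the inner maximization acts on an internal representation rather than the raw input, which is what lets the method elicit failure modes (like sleeper-agent triggers) that no input-space attack finds.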
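The power-law scaling claim for Best-of-N attack success can be illustrated with a simple log-log fit. The measurements below are invented placeholders, not numbers from the paper; the point is only the forecasting mechanic: fit the failure probability as a power law in the attack budget N, then extrapolate.

```python
import numpy as np

# Illustrative Best-of-N measurements (NOT from the paper): attack success
# rate (ASR) after N randomly augmented attempts against a hypothetical model.
N = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])
asr = np.array([0.18, 0.31, 0.47, 0.62, 0.78])

# Power-law model for the failure probability: 1 - ASR = c * N^(-k).
# Taking logs linearizes it: log(1 - ASR) = log(c) - k * log(N).
slope, intercept = np.polyfit(np.log(N), np.log(1.0 - asr), 1)
k, c = -slope, np.exp(intercept)

# Quantitative forecast: extrapolate the fitted curve to a larger budget.
asr_pred = 1.0 - c * 10_000.0 ** (-k)
print(f"fitted exponent k = {k:.3f}, predicted ASR at N=10000: {asr_pred:.3f}")
```

This is the sense in which power-law behavior enables forecasting: a fit on small attack budgets predicts robustness at budgets too expensive to measure directly.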