[2603.24857] AI Security in the Foundation Model Era: A Comprehensive Survey from a Unified Perspective
Computer Science > Cryptography and Security
arXiv:2603.24857 (cs)
[Submitted on 25 Mar 2026]

Title: AI Security in the Foundation Model Era: A Comprehensive Survey from a Unified Perspective
Authors: Zhenyi Wang, Siyu Luan

Abstract: As machine learning (ML) systems expand in both scale and functionality, the security landscape has become increasingly complex, with a proliferation of attacks and defenses. However, existing studies largely treat these threats in isolation, lacking a coherent framework that exposes their shared principles and interdependencies. This fragmented view hinders systematic understanding and limits the design of comprehensive defenses. Crucially, the two foundational assets of ML -- \textbf{data} and \textbf{models} -- are no longer independent; vulnerabilities in one directly compromise the other. The absence of a holistic framework leaves open questions about how these bidirectional risks propagate across the ML pipeline. To address this critical gap, we propose a \emph{unified closed-loop threat taxonomy} that explicitly frames model-data interactions along four directional axes. Our framework offers a principled lens for analyzing and defending foundation models. The resulting four classes of security threats represent distinct but interrelated categories of attacks:...