[2603.27918] Adversarial Attacks on Multimodal Large Language Models: A Comprehensive Survey
Computer Science > Cryptography and Security
arXiv:2603.27918 (cs)
[Submitted on 30 Mar 2026]

Title: Adversarial Attacks on Multimodal Large Language Models: A Comprehensive Survey
Authors: Bhavuk Jain, Sercan Ö. Arık, Hardeo K. Thakur

Abstract: Multimodal large language models (MLLMs) integrate information from multiple modalities such as text, images, audio, and video, enabling complex capabilities like visual question answering and audio translation. While powerful, this increased expressiveness introduces new and amplified vulnerabilities to adversarial manipulation. This survey provides a comprehensive and systematic analysis of adversarial threats to MLLMs, moving beyond enumerating attack techniques to explaining the underlying causes of model susceptibility. We introduce a taxonomy that organizes adversarial attacks by attacker objective, unifying diverse attack surfaces across modalities and deployment settings. We also present a vulnerability-centric analysis that links integrity attacks, safety and jailbreak failures, control and instruction hijacking, and training-time poisoning to shared architectural and representational weaknesses in multimodal systems. Together, this framework provides an explanatory foundation for understanding adversarial behavior in MLLMs and ...