[2603.16960] Adversarial attacks against Modern Vision-Language Models
Computer Science > Cryptography and Security
arXiv:2603.16960 (cs)
[Submitted on 17 Mar 2026 (v1), last revised 22 Mar 2026 (this version, v2)]

Title: Adversarial attacks against Modern Vision-Language Models
Authors: Alejandro Paredes La Torre

Abstract: We study the adversarial robustness of open-source vision-language model (VLM) agents deployed in a self-contained e-commerce environment built to simulate realistic pre-deployment conditions. We evaluate two agents, LLaVA-v1.5-7B and Qwen2.5-VL-7B, under three gradient-based attacks: the Basic Iterative Method (BIM), Projected Gradient Descent (PGD), and a CLIP-based spectral attack. Against LLaVA, all three attacks achieve substantial attack success rates (52.6%, 53.8%, and 66.9%, respectively), demonstrating that simple gradient-based methods pose a practical threat to open-source VLM agents. Qwen2.5-VL proves significantly more robust across all attacks (6.5%, 7.7%, and 15.5%), suggesting meaningful architectural differences in adversarial resilience between open-source VLM families. These findings have direct implications for the security evaluation of VLM agents prior to commercial deployment.

Subjects: Cryptography and Security (cs.CR); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.16960 [cs.CR] (or arXiv:2603.16960v2 [cs.CR] for this version)
https://doi.org/10.48550...
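
For readers unfamiliar with the attack families named in the abstract, the sketch below shows the general shape of an L-infinity PGD image attack; BIM follows the same signed-gradient update, typically without a random start. This is not the paper's implementation: the model wrapper, the embedding-matching loss, and the hyperparameters (epsilon, alpha, num_steps) are illustrative assumptions only.

```python
# Minimal PGD sketch (assumed setup, not the authors' code).
# `model` is assumed to map a [0, 1] image tensor to an embedding;
# `target_embedding` is a hypothetical adversarial target.
import torch
import torch.nn.functional as F


def pgd_attack(model, image, target_embedding,
               epsilon=8 / 255, alpha=2 / 255, num_steps=40):
    original = image.detach()
    # Random start inside the epsilon-ball (the usual PGD initialization).
    adv = (original + torch.empty_like(original).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)

    for _ in range(num_steps):
        adv.requires_grad_(True)
        # Loss is the distance to the adversarial target embedding;
        # we descend it so the model's output drifts toward the target.
        loss = F.mse_loss(model(adv), target_embedding)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            # Signed-gradient step, then project back into the epsilon-ball
            # around the original image and into the valid pixel range.
            adv = adv - alpha * grad.sign()
            adv = original + (adv - original).clamp(-epsilon, epsilon)
            adv = adv.clamp(0.0, 1.0)

    return adv.detach()
```

The CLIP-based spectral attack mentioned in the abstract is not detailed on this page, so it is not sketched here.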