[2603.03637] Image-based Prompt Injection: Hijacking Multimodal LLMs through Visually Embedded Adversarial Instructions
About this article
Abstract page for arXiv paper 2603.03637: Image-based Prompt Injection: Hijacking Multimodal LLMs through Visually Embedded Adversarial Instructions
Computer Science > Computer Vision and Pattern Recognition arXiv:2603.03637 (cs) [Submitted on 4 Mar 2026] Title:Image-based Prompt Injection: Hijacking Multimodal LLMs through Visually Embedded Adversarial Instructions Authors:Neha Nagaraja, Lan Zhang, Zhilong Wang, Bo Zhang, Pawan Patil View a PDF of the paper titled Image-based Prompt Injection: Hijacking Multimodal LLMs through Visually Embedded Adversarial Instructions, by Neha Nagaraja and 4 other authors View PDF HTML (experimental) Abstract:Multimodal Large Language Models (MLLMs) integrate vision and text to power applications, but this integration introduces new vulnerabilities. We study Image-based Prompt Injection (IPI), a black-box attack in which adversarial instructions are embedded into natural images to override model behavior. Our end-to-end IPI pipeline incorporates segmentation-based region selection, adaptive font scaling, and background-aware rendering to conceal prompts from human perception while preserving model interpretability. Using the COCO dataset and GPT-4-turbo, we evaluate 12 adversarial prompt strategies and multiple embedding configurations. The results show that IPI can reliably manipulate the output of the model, with the most effective configuration achieving up to 64\% attack success under stealth constraints. These findings highlight IPI as a practical threat in black-box settings and underscore the need for defenses against multimodal prompt injection. Comments: Subjects: Computer V...