[2603.00529] CaptionFool: Universal Image Captioning Model Attacks
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.00529 (cs)
[Submitted on 28 Feb 2026]

Title: CaptionFool: Universal Image Captioning Model Attacks
Authors: Swapnil Parekh

Abstract: Image captioning models are encoder-decoder architectures trained on large-scale image-text datasets, making them susceptible to adversarial attacks. We present CaptionFool, a novel universal (input-agnostic) adversarial attack against state-of-the-art transformer-based captioning models. By modifying only 7 out of 577 image patches (approximately 1.2% of the image), our attack achieves a 94-96% success rate in generating arbitrary target captions, including offensive content. We further demonstrate that CaptionFool can generate "slang" terms specifically designed to evade existing content moderation filters. Our findings expose critical vulnerabilities in deployed vision-language models and underscore the urgent need for robust defenses against such attacks.

Warning: This paper contains model outputs which are offensive in nature.

Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.00529 [cs.CV] (or arXiv:2603.00529v1 [cs.CV] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.00529 (arXiv-issued DOI via DataCite, pending registration)

Submission history
From: Swap...
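To make the "7 out of 577 patches" figure concrete, the sketch below shows the setup of a universal (input-agnostic) patch attack: a fixed set of adversarial patch values is pasted onto any input image at fixed grid positions. The grid geometry (a ViT-style encoder with 14x14-pixel patches on a 336x336 input, giving 24*24 = 576 spatial patches plus a CLS token) and the helper names are assumptions for illustration, not the paper's actual implementation; the optimization of the patch values themselves is omitted.

```python
# Illustrative sketch of a universal patch attack's application step.
# GRID and PATCH are assumed values (ViT-style 24x24 grid of 14-pixel
# patches); the paper's exact architecture is not specified here.
GRID = 24   # assumed patches per side
PATCH = 14  # assumed patch edge length in pixels

def patch_bounds(idx):
    """Map a flat patch index to its (row0, col0, row1, col1) pixel box."""
    r, c = divmod(idx, GRID)
    return (r * PATCH, c * PATCH, (r + 1) * PATCH, (c + 1) * PATCH)

def apply_universal_patches(image, adv_patches):
    """Paste the same pre-optimized patch pixels onto *any* input image.

    image: H x W list-of-lists of grayscale values (toy stand-in).
    adv_patches: {patch_index: PATCH x PATCH list of adversarial values}.
    The attack is universal: adv_patches stays fixed while image varies.
    """
    out = [row[:] for row in image]
    for idx, vals in adv_patches.items():
        r0, c0, r1, c1 = patch_bounds(idx)
        for i in range(r0, r1):
            for j in range(c0, c1):
                out[i][j] = vals[i - r0][j - c0]
    return out

# 7 modified patches out of a 24x24 grid is roughly 1.2% of the image,
# matching the fraction reported in the abstract.
n_modified, n_total = 7, GRID * GRID
print(f"{n_modified / n_total:.1%} of patches modified")
```

In a full attack, `adv_patches` would be optimized (e.g., by gradient ascent on the likelihood of a target caption) over a training set of images, then applied unchanged to unseen inputs; only that fixed overlay step is sketched here.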