[2603.00565] MIDAS: Multi-Image Dispersion and Semantic Reconstruction for Jailbreaking MLLMs
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.00565 (cs)
[Submitted on 28 Feb 2026]

Title: MIDAS: Multi-Image Dispersion and Semantic Reconstruction for Jailbreaking MLLMs
Authors: Yilian Liu, Xiaojun Jia, Guoshun Nan, Jiuyang Lyu, Zhican Chen, Tao Guan, Shuyuan Luo, Zhongyi Zhai, Yang Liu

Abstract: Multimodal Large Language Models (MLLMs) have achieved remarkable performance but remain vulnerable to jailbreak attacks that can induce harmful content and undermine their secure deployment. Previous studies have shown that introducing additional inference steps that disrupt safety-related attention can make MLLMs more susceptible to being misled into generating malicious content. However, these methods rely on single-image masking or isolated visual cues, which only modestly extend reasoning paths and thus achieve limited effectiveness, particularly against strongly aligned commercial closed-source models. To address this problem, we propose Multi-Image Dispersion and Semantic Reconstruction (MIDAS), a multimodal jailbreak framework that decomposes harmful semantics into risk-bearing subunits, disperses them across multiple visual cues, and leverages cross-image reasoning to gradually reconstruct the malicious intent, thereby bypassing existing safety mechanisms. The pr...