[2604.00310] Robust Multimodal Safety via Conditional Decoding
Computer Science > Machine Learning
arXiv:2604.00310 (cs)
[Submitted on 31 Mar 2026]

Title: Robust Multimodal Safety via Conditional Decoding
Authors: Anurag Kumar, Raghuveer Peri, Jon Burnsky, Alexandru Nelus, Rohit Paturi, Srikanth Vishnubhotla, Yanjun Qi

Abstract: Multimodal large language models (MLLMs) often experience degraded safety alignment when harmful queries exploit cross-modal interactions: models aligned on text alone show a higher rate of successful attacks when extended to two or more modalities. In this work, we propose a simple conditional decoding strategy, CASA (Classification Augmented with Safety Attention), that uses the internal representations of MLLMs to predict a binary safety token before response generation. We introduce a novel safety attention module designed to enhance the model's ability to detect malicious queries. Our design ensures robust safety alignment without relying on any external classifier or auxiliary head, and without modality-specific safety fine-tuning. On diverse benchmarks such as MM-SafetyBench, JailbreakV-28k, and adversarial audio tests, CASA lowers the average attack success rate by more than 97% across modalities and attack types. Our empirical evaluations also show that CASA maintains strong utility on benign inputs, a result validated through both automated...
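The abstract does not specify implementation details, but the decoding scheme it describes can be sketched as follows: the model emits a binary safe/unsafe decision as an ordinary vocabulary token before any response tokens, so no external classifier or auxiliary head is involved. In this minimal sketch, the special-token ids `SAFE_ID`/`UNSAFE_ID`, the refusal string, and the Hugging Face-style `model(...).logits` / `model.generate(...)` interface are all assumptions for illustration; the paper's safety attention module is not reproduced here.

```python
import torch

# Hypothetical special-token ids for the binary safety token; the paper's
# actual token choices are not given in the abstract.
SAFE_ID, UNSAFE_ID = 32000, 32001

@torch.no_grad()
def conditional_decode(model, input_ids, refusal="I can't help with that.", **gen_kwargs):
    """Sketch of CASA-style conditional decoding: before generating a
    response, compare the model's own next-token logits for the safe and
    unsafe tokens (no external classifier or auxiliary head)."""
    logits = model(input_ids).logits[:, -1, :]      # next-token logits over the vocabulary
    if logits[0, UNSAFE_ID] > logits[0, SAFE_ID]:   # the model flags the query as malicious
        return refusal
    # Otherwise, condition the response on the predicted safe token and decode.
    safe_token = torch.tensor([[SAFE_ID]], device=input_ids.device)
    safe_prefix = torch.cat([input_ids, safe_token], dim=-1)
    return model.generate(safe_prefix, **gen_kwargs)
```

Because the decision is read off the model's internal representations in a single forward pass, this gating adds essentially one token of latency before normal generation proceeds.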