[2603.20198] Visual Exclusivity Attacks: Automatic Multimodal Red Teaming via Agentic Planning


arXiv - Machine Learning 4 min read


Computer Science > Cryptography and Security
arXiv:2603.20198 (cs) · Submitted on 5 Feb 2026

Title: Visual Exclusivity Attacks: Automatic Multimodal Red Teaming via Agentic Planning
Authors: Yunbei Zhang, Yingqiang Ge, Weijie Xu, Yuhui Xu, Jihun Hamm, Chandan K. Reddy

Abstract: Current multimodal red teaming treats images as wrappers for malicious payloads delivered via typography or adversarial noise. These attacks are structurally brittle: standard defenses neutralize them once the payload is exposed. We introduce Visual Exclusivity (VE), a more resilient Image-as-Basis threat in which harm emerges only through reasoning over visual content such as technical schematics. To systematically exploit VE, we propose Multimodal Multi-turn Agentic Planning (MM-Plan), a framework that reframes jailbreaking from turn-by-turn reaction to global plan synthesis. MM-Plan trains an attacker planner to synthesize comprehensive multi-turn strategies, optimized via Group Relative Policy Optimization (GRPO), enabling self-discovery of effective strategies without human supervision. To rigorously benchmark this reasoning-dependent threat, we introduce VE-Safety, a human-curated dataset that fills a critical gap in evaluating high-risk technical visual understanding. MM-Plan achieves a 46.3% attack success rate against Claude 4.5 So...
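The abstract says the attacker planner is optimized with Group Relative Policy Optimization (GRPO). The core of GRPO is replacing a learned value baseline with group-relative reward normalization: several candidate plans are sampled for the same prompt, and each plan's advantage is its reward standardized against the group's mean and standard deviation. The sketch below illustrates only that normalization step; the function name, group size, and rewards are illustrative assumptions, not details from the paper.

```python
def group_relative_advantages(rewards, eps=1e-8):
    """Standardize each reward against its sampling group's statistics.

    In GRPO-style training, a positive advantage upweights the sampled
    response relative to its group; a negative one downweights it.
    (Hypothetical helper; not code from the MM-Plan paper.)
    """
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Example: judge rewards for 4 attack plans sampled for one prompt.
# Successful plans (reward 1.0) get positive advantages, failed ones negative.
advs = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
```

Because advantages are centered within each group, they sum to (approximately) zero, so the update pushes probability mass from below-average plans toward above-average ones without needing a separate critic model.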

Originally published on March 24, 2026. Curated by AI News.


