[2510.27623] BEAT: Visual Backdoor Attacks on VLM-based Embodied Agents via Contrastive Trigger Learning
Summary
The paper presents BEAT, the first framework for injecting visual backdoors into Vision-Language Model (VLM)-based embodied agents, using objects in the environment as triggers and exposing significant security vulnerabilities.
Why It Matters
As VLMs are increasingly integrated into real-world applications, understanding their vulnerabilities is crucial. BEAT reveals how visual triggers can manipulate agent behavior, emphasizing the need for robust security measures before deployment in sensitive environments.
Key Takeaways
- BEAT introduces a method for visual backdoor attacks using object triggers.
- The framework achieves attack success rates of up to 80% while preserving benign task performance.
- Contrastive Trigger Learning (CTL) significantly improves backdoor activation accuracy.
- The study highlights critical security risks in VLM-based systems.
- Robust defenses are necessary before deploying these agents in real-world scenarios.
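The summary does not spell out the Contrastive Trigger Learning objective, but contrastive losses of this kind generally pull together embeddings of observations that share a property (here, presence of the trigger object) while pushing apart embeddings that differ in it. A minimal triplet-style sketch is shown below; it is illustrative only, and the function name, margin formulation, and toy embeddings are assumptions, not the paper's actual implementation:

```python
import math

def contrastive_trigger_loss(anchor, positive, negative, margin=1.0):
    """Triplet-style hinge loss (illustrative stand-in for CTL).

    anchor, positive: embeddings of two trigger-present views (pulled together)
    negative: embedding of a trigger-free view (pushed at least `margin` away)
    """
    d_pos = math.dist(anchor, positive)  # distance between trigger-present views
    d_neg = math.dist(anchor, negative)  # distance to the trigger-free view
    # Loss is zero once the trigger-free view is `margin` farther than the match.
    return max(d_pos - d_neg + margin, 0.0)

# Toy 3-d embeddings (hypothetical): two trigger-present views and one clean view.
a = [1.0, 0.0, 0.0]
p = [0.9, 0.1, 0.0]
n = [0.0, 1.0, 0.0]
loss = contrastive_trigger_loss(a, p, n)
```

With these toy vectors the trigger-free view is already well separated, so the hinge clamps the loss to zero; swapping the positive and negative would yield a positive loss, driving the embeddings apart during training.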
Computer Science > Artificial Intelligence
arXiv:2510.27623 (cs)
[Submitted on 31 Oct 2025 (v1), last revised 22 Feb 2026 (this version, v3)]
Title: BEAT: Visual Backdoor Attacks on VLM-based Embodied Agents via Contrastive Trigger Learning
Authors: Qiusi Zhan, Hyeonjeong Ha, Rui Yang, Sirui Xu, Hanyang Chen, Liang-Yan Gui, Yu-Xiong Wang, Huan Zhang, Heng Ji, Daniel Kang
Abstract: Recent advances in Vision-Language Models (VLMs) have propelled embodied agents by enabling direct perception, reasoning, and planning task-oriented actions from visual inputs. However, such vision-driven embodied agents open a new attack surface: visual backdoor attacks, where the agent behaves normally until a visual trigger appears in the scene, then persistently executes an attacker-specified multi-step policy. We introduce BEAT, the first framework to inject such visual backdoors into VLM-based embodied agents using objects in the environments as triggers. Unlike textual triggers, object triggers exhibit wide variation across viewpoints and lighting, making them difficult to implant reliably. BEAT addresses this challenge by (1) constructing a training set that spans diverse scenes, tasks, and trigger placements to expose agents to trigger variability, and (2) introducing a two-stage training scheme that first ...