[2510.27623] BEAT: Visual Backdoor Attacks on VLM-based Embodied Agents via Contrastive Trigger Learning

Summary

The paper presents BEAT, a novel framework for executing visual backdoor attacks on Vision-Language Model (VLM)-based embodied agents, highlighting significant security vulnerabilities.

Why It Matters

As VLMs are increasingly integrated into real-world applications, understanding their vulnerabilities is crucial. BEAT reveals how visual triggers can manipulate agent behavior, emphasizing the need for robust security measures before deployment in sensitive environments.

Key Takeaways

  • BEAT introduces a method for visual backdoor attacks that uses ordinary objects in the environment as triggers.
  • The framework achieves attack success rates of up to 80% while maintaining normal task performance.
  • Contrastive Trigger Learning (CTL) significantly improves the accuracy of backdoor activation.
  • The study highlights critical security risks in VLM-based embodied systems.
  • Robust defenses are necessary before deploying these agents in real-world scenarios.

Computer Science > Artificial Intelligence

arXiv:2510.27623 (cs) [Submitted on 31 Oct 2025 (v1), last revised 22 Feb 2026 (this version, v3)]

Title: BEAT: Visual Backdoor Attacks on VLM-based Embodied Agents via Contrastive Trigger Learning

Authors: Qiusi Zhan, Hyeonjeong Ha, Rui Yang, Sirui Xu, Hanyang Chen, Liang-Yan Gui, Yu-Xiong Wang, Huan Zhang, Heng Ji, Daniel Kang

Abstract: Recent advances in Vision-Language Models (VLMs) have propelled embodied agents by enabling direct perception, reasoning, and planning of task-oriented actions from visual inputs. However, such vision-driven embodied agents open a new attack surface: visual backdoor attacks, where the agent behaves normally until a visual trigger appears in the scene, then persistently executes an attacker-specified multi-step policy. We introduce BEAT, the first framework to inject such visual backdoors into VLM-based embodied agents using objects in the environments as triggers. Unlike textual triggers, object triggers exhibit wide variation across viewpoints and lighting, making them difficult to implant reliably. BEAT addresses this challenge by (1) constructing a training set that spans diverse scenes, tasks, and trigger placements to expose agents to trigger variability, and (2) introducing a two-stage training scheme that first ...
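The abstract does not spell out the CTL objective, but the core idea of contrastive learning over matched trigger-present and trigger-free inputs can be illustrated with a minimal margin-based sketch. Everything below is an assumption for illustration: the function name `contrastive_trigger_loss`, the hinge formulation, and the margin value are hypothetical and are not taken from the paper, which may use a different contrastive formulation.

```python
import numpy as np

def contrastive_trigger_loss(logit_trigger, logit_clean, margin=1.0):
    """Hypothetical hinge-style contrastive objective over matched pairs.

    For each pair (same scene and task, observed with and without the
    trigger object), the model's score for the attacker-specified action
    on the trigger-present input should exceed its score for that action
    on the trigger-free input by at least `margin`. Pairs that already
    satisfy the margin contribute zero loss.
    """
    gap = np.asarray(logit_trigger, dtype=float) - np.asarray(logit_clean, dtype=float)
    return float(np.maximum(0.0, margin - gap).mean())

# Well-separated pairs: trigger raises the target-action score far above
# the clean score, so the hinge is inactive.
loss_separated = contrastive_trigger_loss([3.0, 2.5], [0.5, 0.2])  # -> 0.0

# Confused pairs: trigger barely changes the score, so the loss pushes
# the model to discriminate trigger-present from trigger-free inputs.
loss_confused = contrastive_trigger_loss([1.0, 0.8], [0.9, 0.7])   # -> 0.9
```

The intuition matches the abstract's stated challenge: because object triggers vary across viewpoints and lighting, training only on trigger-present examples can leave activation unreliable, whereas a paired contrastive signal explicitly sharpens the decision boundary between otherwise-identical scenes.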
