[2602.17345] What Breaks Embodied AI Security: LLM Vulnerabilities, CPS Flaws, or Something Else?


arXiv - AI · 4 min read

Summary

This survey examines vulnerabilities in embodied AI systems, arguing that existing analyses focused solely on LLM vulnerabilities or classical CPS failures are inadequate. It attributes many observed breakdowns to embodiment-induced, system-level mismatches that neither lens alone can explain, and draws out the implications for securing these systems in real-world deployments.

Why It Matters

As embodied AI systems become more prevalent in safety-critical environments, understanding their vulnerabilities is essential. This research emphasizes the need for a holistic approach to security that considers system-level interactions and risks, which is crucial for developing safer AI technologies.

Key Takeaways

  • Embodied AI failures often stem from system-level mismatches rather than isolated flaws.
  • Semantically correct plans can still be physically unsafe, because language-level reasoning abstracts away physical constraints.
  • Identical actions can yield different outcomes due to nonlinear dynamics and uncertainties.
  • Errors can propagate through tightly coupled perception-decision-action loops.
  • Safety considerations must encompass both time and system layers for effective risk management.
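
The second and fourth takeaways can be made concrete with a toy simulation (not from the paper; the controller, bias value, and noise levels are illustrative assumptions). An agent tries to hold position zero, but its perception carries a fixed bias: the decision logic is identical in both runs, yet the error in sensing propagates through the loop and settles into a persistent offset in the physical state.

```python
import random

def run_episode(perception_bias, steps=50, seed=0):
    """Minimal perception-decision-action loop (illustrative sketch).

    The agent tries to hold position 0. Each step it perceives its
    position with a fixed bias plus sensor noise, then acts to correct
    the *perceived* error. The perception flaw compounds through the
    loop and shows up as a steady-state offset in the true position.
    """
    rng = random.Random(seed)
    position = 0.0
    for _ in range(steps):
        perceived = position + perception_bias + rng.gauss(0, 0.01)
        action = -0.5 * perceived                 # proportional control on the estimate
        position += action + rng.gauss(0, 0.01)  # actuation noise
    return position

# Identical decision logic, two perception biases: the residual error
# tracks the sensing flaw, not the controller.
unbiased = run_episode(perception_bias=0.0)   # settles near 0
biased = run_episode(perception_bias=0.2)     # settles near -0.2
```

The point of the sketch is that no single component is "broken" in isolation: the controller is correct given its input, and the sensor error is small per step, but the closed loop turns a sensing flaw into a persistent physical deviation.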

Computer Science > Cryptography and Security
arXiv:2602.17345 (cs) · Submitted on 19 Feb 2026

Title: What Breaks Embodied AI Security: LLM Vulnerabilities, CPS Flaws, or Something Else?
Authors: Boyang Ma, Hechuan Guo, Peizhuo Lv, Minghui Xu, Xuelong Dai, YeChao Zhang, Yijun Yang, Yue Zhang

Abstract: Embodied AI systems (e.g., autonomous vehicles, service robots, and LLM-driven interactive agents) are rapidly transitioning from controlled environments to safety-critical real-world deployments. Unlike disembodied AI, failures in embodied intelligence lead to irreversible physical consequences, raising fundamental questions about security, safety, and reliability. While existing research predominantly analyzes embodied AI through the lenses of Large Language Model (LLM) vulnerabilities or classical Cyber-Physical System (CPS) failures, this survey argues that these perspectives are individually insufficient to explain many observed breakdowns in modern embodied systems. We posit that a significant class of failures arises from embodiment-induced system-level mismatches, rather than from isolated model flaws or traditional CPS attacks. Specifically, we identify four core insights that explain why embodied AI is fundament...


