[2602.17345] What Breaks Embodied AI Security: LLM Vulnerabilities, CPS Flaws, or Something Else?
Summary
This paper examines vulnerabilities in embodied AI systems, arguing that existing analyses focused solely on LLM weaknesses or classical CPS failures are inadequate. It identifies four core insights into why securing these systems in real-world deployments is fundamentally difficult.
Why It Matters
As embodied AI systems become more prevalent in safety-critical environments, understanding their vulnerabilities is essential. This research emphasizes the need for a holistic approach to security that considers system-level interactions and risks, which is crucial for developing safer AI technologies.
Key Takeaways
- Embodied AI failures often stem from system-level mismatches rather than isolated flaws.
- Semantically correct reasoning does not guarantee physical safety, because models reason over abstractions of the physical world.
- Identical actions can yield different outcomes due to nonlinear dynamics and uncertainties.
- Errors can propagate through tightly coupled perception-decision-action loops.
- Safety considerations must encompass both time and system layers for effective risk management.
Computer Science > Cryptography and Security
arXiv:2602.17345 (cs) [Submitted on 19 Feb 2026]
Title: What Breaks Embodied AI Security: LLM Vulnerabilities, CPS Flaws, or Something Else?
Authors: Boyang Ma, Hechuan Guo, Peizhuo Lv, Minghui Xu, Xuelong Dai, YeChao Zhang, Yijun Yang, Yue Zhang
Abstract: Embodied AI systems (e.g., autonomous vehicles, service robots, and LLM-driven interactive agents) are rapidly transitioning from controlled environments to safety-critical real-world deployments. Unlike disembodied AI, failures in embodied intelligence lead to irreversible physical consequences, raising fundamental questions about security, safety, and reliability. While existing research predominantly analyzes embodied AI through the lenses of Large Language Model (LLM) vulnerabilities or classical Cyber-Physical System (CPS) failures, this survey argues that these perspectives are individually insufficient to explain many observed breakdowns in modern embodied systems. We posit that a significant class of failures arises from embodiment-induced system-level mismatches, rather than from isolated model flaws or traditional CPS attacks. Specifically, we identify four core insights that explain why embodied AI is fundament...