[2604.01346] Safety, Security, and Cognitive Risks in World Models
Computer Science > Cryptography and Security
arXiv:2604.01346 (cs) [Submitted on 1 Apr 2026]

Title: Safety, Security, and Cognitive Risks in World Models
Authors: Manoj Parmar

Abstract: World models -- learned internal simulators of environment dynamics -- are rapidly becoming foundational to autonomous decision-making in robotics, autonomous vehicles, and agentic AI. Yet this predictive power introduces a distinctive set of safety, security, and cognitive risks. Adversaries can corrupt training data, poison latent representations, and exploit compounding rollout errors to cause catastrophic failures in safety-critical deployments. World model-equipped agents are more capable of goal misgeneralisation, deceptive alignment, and reward hacking precisely because they can simulate the consequences of their own actions. Authoritative world model predictions further foster automation bias and miscalibrated human trust that operators lack the tools to audit. This paper surveys the world model landscape; introduces formal definitions of trajectory persistence and representational risk; presents a five-profile attacker capability taxonomy; and develops a unified threat model extending MITRE ATLAS and the OWASP LLM Top 10 to the world model stack. We provide an empirical proof-of-concept on trajectory-persistent adversarial attacks (GRU-RSSM: A_1 = 2.26x am...
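The abstract's point about compounding rollout errors can be illustrated with a minimal sketch (not taken from the paper): a learned dynamics model whose one-step prediction error is tiny, but whose open-loop rollout error grows with the horizon because each step's error feeds into the next. The dynamics matrices, the perturbation scale, and the horizon below are all hypothetical.

```python
# Illustrative sketch: compounding error in an open-loop world-model rollout.
# All dynamics and parameters here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

A_true = np.array([[1.0, 0.1],
                   [0.0, 1.0]])                       # "true" environment dynamics
A_model = A_true + 0.01 * rng.standard_normal((2, 2)) # learned model, slightly wrong

x_true = np.array([1.0, 0.0])
x_model = x_true.copy()
errors = []
for t in range(50):
    x_true = A_true @ x_true
    x_model = A_model @ x_model   # open-loop rollout: model consumes its own output
    errors.append(np.linalg.norm(x_true - x_model))

# The one-step error is ~1e-2, but the horizon-50 error is far larger:
# small per-step discrepancies accumulate rather than averaging out.
print(f"step 1 error:  {errors[0]:.4f}")
print(f"step 50 error: {errors[-1]:.4f}")
```

This is the mechanism an adversary can exploit with trajectory-persistent attacks: a perturbation that only slightly biases the one-step transition can still dominate long-horizon predictions.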