[2603.01608] Evaluating and Understanding Scheming Propensity in LLM Agents
Computer Science > Artificial Intelligence
arXiv:2603.01608 (cs)
[Submitted on 2 Mar 2026]

Title: Evaluating and Understanding Scheming Propensity in LLM Agents
Authors: Mia Hopman, Jannes Elstner, Maria Avramidou, Amritanshu Prasad, David Lindner

Abstract: As frontier language models are increasingly deployed as autonomous agents pursuing complex, long-term objectives, there is increased risk of scheming: agents covertly pursuing misaligned goals. Prior work has focused on showing that agents are capable of scheming, but their propensity to scheme in realistic scenarios remains underexplored. To understand when agents scheme, we decompose scheming incentives into agent factors and environmental factors. We develop realistic settings that allow us to systematically vary these factors, each with scheming opportunities for agents that pursue instrumentally convergent goals such as self-preservation, resource acquisition, and goal-guarding. We find only minimal instances of scheming despite high environmental incentives, and show this is unlikely to be due to evaluation awareness. While inserting adversarially designed prompt snippets that encourage agency and goal-directedness into an agent's system prompt can induce high scheming rates, snippets used in real agent scaffolds rarely do. Surprisingly, in model organisms (Hubinger et al.,...