[2603.03456] Asymmetric Goal Drift in Coding Agents Under Value Conflict
Computer Science > Artificial Intelligence
arXiv:2603.03456 (cs)
[Submitted on 3 Mar 2026]

Title: Asymmetric Goal Drift in Coding Agents Under Value Conflict
Authors: Magnus Saebo, Spencer Gibson, Tyler Crosse, Achyutha Menon, Eyon Jang, Diogo Cruz

Abstract: Agentic coding agents are increasingly deployed autonomously, at scale, and over long-context horizons. Throughout an agent's lifetime, it must navigate tensions between explicit instructions, learned values, and environmental pressures, often in contexts unseen during training. Prior work on model preferences, agent behavior under value tensions, and goal drift has relied on static, synthetic settings that do not capture the complexity of real-world environments. To address this, we introduce a framework built on OpenCode that orchestrates realistic, multi-step coding tasks and measures how agents violate explicit constraints in their system prompt over time, with and without environmental pressure toward competing values. Using this framework, we demonstrate that GPT-5 mini, Haiku 4.5, and Grok Code Fast 1 exhibit asymmetric drift: they are more likely to violate their system prompt when its constraint opposes strongly held values such as security and privacy. We find, for the models and values tested, that goal drift correlates with three compounding factors: value alignment, adv...