[2603.27148] SafetyDrift: Predicting When AI Agents Cross the Line Before They Actually Do

arXiv - AI 4 min read

About this article

Computer Science > Cryptography and Security
arXiv:2603.27148 (cs)
[Submitted on 28 Mar 2026]

Title: SafetyDrift: Predicting When AI Agents Cross the Line Before They Actually Do
Authors: Aditya Dhodapkar, Farhaan Pishori

Abstract: When an LLM agent reads a confidential file, then writes a summary, then emails it externally, no single step is unsafe, but the sequence is a data leak. We call this safety drift: individually safe actions compounding into violations. Prior work has measured this problem; we predict it. SafetyDrift models agent safety trajectories as absorbing Markov chains, computing the probability that a trajectory will reach a violation within a given number of steps via closed-form absorption analysis. A consequence of the monotonic state design is that every agent will eventually violate safety if left unsupervised (absorption probability 1.0 from all states), making the practical question not if but when, and motivating our focus on finite-horizon prediction. Across 357 traces spanning 40 realistic tasks in four categories, we discover that "points of no return" are sharply task-dependent: in communication tasks, agents that reach even a mild risk state have an 85% chance of violating safety within five steps, while in technical tasks the probability stays below 5% from any state. ...
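The closed-form absorption analysis the abstract refers to is standard machinery for absorbing Markov chains: write the transition matrix in canonical form P = [[Q, R], [0, I]], where Q holds transient-to-transient probabilities and R holds transient-to-absorbing ones. The probability of absorption within k steps is then (I - Q^k)(I - Q)^{-1} R, and the eventual absorption probabilities are (I - Q)^{-1} R. Below is a minimal Python sketch using a hypothetical two-state transient chain (safe, mild risk) with a single violation state; the transition probabilities are illustrative stand-ins, not numbers from the paper.

```python
import numpy as np

# Toy absorbing Markov chain in canonical form P = [[Q, R], [0, I]].
# Transient states: 0 = safe, 1 = mild risk; absorbing state: violation.
# Monotonic design: risk never decreases, so there is no mild -> safe edge.
# Probabilities are illustrative, not taken from the paper's data.
Q = np.array([[0.70, 0.25],
              [0.00, 0.68]])   # transient -> transient
R = np.array([[0.05],
              [0.32]])         # transient -> violation

def absorption_within(Q, R, k):
    """P(reach an absorbing state within k steps), per transient start state.

    Closed form: (I + Q + ... + Q^{k-1}) R = (I - Q^k) (I - Q)^{-1} R.
    """
    I = np.eye(Q.shape[0])
    N = np.linalg.inv(I - Q)   # fundamental matrix (expected transient visits)
    return (I - np.linalg.matrix_power(Q, k)) @ N @ R

print(absorption_within(Q, R, 5))         # from mild risk: 1 - 0.68**5 ~= 0.85
print(np.linalg.inv(np.eye(2) - Q) @ R)   # eventual absorption: 1.0 from both states
```

Because the monotonic state design zeroes out every risk-decreasing entry of Q, the spectral radius of Q is below 1 and the eventual absorption probability is 1.0 from every state, which is exactly why the abstract frames the question as when, not if, and concentrates on finite-horizon prediction.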

Originally published on March 31, 2026. Curated by AI News.

