Anthropic’s Claude Code gets ‘safer’ auto mode | The Verge
The feature is a middle-ground between cautious handholding and dangerous levels of autonomy.
by Robert Hart | Mar 25, 2026, 11:39 AM UTC

Anthropic has launched an "auto mode" for Claude Code, a new tool that lets the AI make permissions-level decisions on users' behalf. The company says the feature offers vibe coders a safer middle ground between constant handholding and giving the model dangerous levels of autonomy.

Claude Code is capable of acting independently on users' behalf, a useful but risky feature because it can also do things users don't want, like deleting files, sending out sensitive data, or executing malicious code or hidden instructions. Auto mode is designed to prevent this by flagging and blocking potentially risky actions before they run, then giving the agent a chance to try again or asking the user to intervene.

Right now, auto mode is available only as a research preview for Team plan users. Anthropic says access will expand to Enterprise and API users in "the coming days."

Anthropic warns the tool is experimental and "doesn't eliminate" risk entirely, recommending developers use it in "isolated environments."