Anthropic’s most dangerous AI model just fell into the wrong hands | The Verge
A Discord group has had access to the Mythos model for two weeks.

by Jess Weatherbed
Apr 22, 2026, 9:18 AM UTC

Anthropic’s Mythos AI model, a powerful cybersecurity tool that the company said could be dangerous in the wrong hands, has been accessed by a “small group of unauthorized users,” Bloomberg reports. An unnamed member of the group, identified only as “a third-party contractor for Anthropic,” told the publication that members of a private online forum got into Mythos through a mix of tactics, using the contractor’s access and “commonly used internet sleuthing tools.”

The Claude Mythos Preview is a new general-purpose model capable of identifying and exploiting vulnerabilities “in every major operating system and every major web browser when directed by a user to do so,” according to Anthropic. Official access to the model is limited to a handful of companies through the Project Glasswing initiative, including Nvidia, Google, Amazon Web Services, Apple, and Microsoft. Governments are also eyeing the technology. Anthropic currently has no plans to release the model publicly due to concerns that it could be weaponized.

“We’re invest...