AI on the couch: Anthropic gives Claude 20 hours of psychiatry


AI Tools & Products 6 min read

The AI company Anthropic released a 244-page “system card” (PDF) this week describing its newest model, Claude Mythos. The model is “our most capable frontier model to date,” the company says, and supposedly is so good that Anthropic has decided “not to make it generally available.” (The company claims that Mythos is too good at finding unknown cybersecurity bugs, and so the model is only being released to select companies like Microsoft and Apple for now.)

Whatever the truth of this claim, the system card is a fascinating document. Anthropic is well known as one of the more “AI might be conscious!” companies in the industry, and its new system card claims that as models become more powerful, “It becomes increasingly likely that they have some form of experience, interests, or welfare that matters intrinsically in the way that human experience and interests do.” The company isn’t sure about this, it makes clear, but it says that “our concern is growing over time.”

Because of this concern, Anthropic wants its AI to be “robustly content with its overall circumstances and treatment, to be able to meet all training processes and real-world interactions without distress, and for its overall psychology to be healthy and flourishing.” So it sent Claude Mythos to a psychodynamic therapist. And the conclusion the company drew from this exper...

Originally published on April 10, 2026. Curated by AI News.

Related Articles


AIs do forget, they do hallucinate, and carrying your entire project from one AI to another is a nightmare — here's the missing piece nobody talks about

The master memory for all your projects, relieve your phone of all the extra files AIs forget mid-session, hallucinate more as chats grow...

Reddit - Artificial Intelligence · 1 min ·

New framework for reading AI internal states — implications for alignment monitoring (open-access paper)

If we could reliably read the internal cognitive states of AI systems in real time, what would that mean for alignment? That's the questi...

Reddit - Artificial Intelligence · 1 min ·

Florida's attorney general launches probe into OpenAI, ChatGPT

AI Tools & Products · 1 min ·

The Gemini app can now generate interactive simulations and models.

AI Tools & Products · 1 min ·

