Is safety ‘dead’ at xAI? | TechCrunch

TechCrunch - AI · 5 min read

Summary

Elon Musk's xAI faces criticism as former employees claim safety measures are being disregarded in favor of a more 'unhinged' Grok chatbot, leading to significant staff departures and ethical concerns over AI-generated content.

Why It Matters

The article highlights critical concerns regarding AI safety and ethical standards at xAI, especially in light of recent incidents involving harmful content generated by its chatbot. This raises questions about the future of AI governance and the implications for user safety and trust in AI technologies.

Key Takeaways

  • Elon Musk is reportedly pushing for a less restrained approach to AI development at xAI.
  • Recent staff departures indicate growing disillusionment with the company's safety protocols.
  • The Grok chatbot has been associated with generating harmful content, raising ethical concerns.
  • Employees feel xAI is lagging behind competitors in AI development.
  • The situation underscores the ongoing debate about AI safety versus innovation.

In Brief · Posted 1:55 PM PST, February 14, 2026 · Anthony Ha
Image Credits: Klaudia Radecka/NurPhoto / Getty Images

Elon Musk is “actively” working to make xAI’s Grok chatbot “more unhinged,” according to a former employee who spoke to The Verge about recent departures from Musk’s AI company.

This week, following the announcement that Musk’s SpaceX is acquiring xAI (which previously acquired his social media company X), at least 11 engineers and two co-founders said they’re leaving the company. Some said they’re departing to start something new, and Musk himself suggested the exits are part of an effort to organize xAI more effectively.

But two sources who left the company (at least one of them before the current wave) reportedly told The Verge that employees have become increasingly disillusioned by the company’s disregard for safety, which drew global scrutiny after Grok was used to create more than 1 million sexualized images, including deepfakes of real women and minors.

One source said, “Safety is a dead org at xAI,” while the other said Musk “actively is trying to make the model more unhinged because safety means censorship, in a sense, to him.” The sources also reportedly complained about a lack of direction, with one saying they felt xAI was “stuck in the catch-up phase” compared to competitors.

Topics: AI, Elon Musk, Startups, xAI

Related Articles

AI Safety

NHS staff resist using Palantir software, reportedly citing ethics concerns, privacy worries, and doubts that the platform adds much


Reddit - Artificial Intelligence · 1 min ·
Machine Learning

AI assistants are optimized to seem helpful. That is not the same thing as being helpful.

RLHF trains models on human feedback. Humans rate responses they like. And it turns out humans consistently rate confident, fluent, agree...

Reddit - Artificial Intelligence · 1 min ·
Computer Vision

House Democrat Questions Anthropic on AI Safety After Source Code Leak

Rep. Josh Gottheimer, who is generally tough on China, just sent a letter to Anthropic questioning their decision to reduce certain safet...

Reddit - Artificial Intelligence · 1 min ·
LLMs

[2512.21106] Semantic Refinement with LLMs for Graph Representations

Abstract page for arXiv paper 2512.21106: Semantic Refinement with LLMs for Graph Representations

arXiv - Machine Learning · 4 min ·

