Anthropic accuses DeepSeek and other Chinese firms of using Claude to train their AI | The Verge

The Verge - AI 4 min read Article

Summary

Anthropic accuses DeepSeek and other Chinese firms of misusing its Claude AI model to enhance their own products through illicit distillation methods.

Why It Matters

This issue highlights the growing concerns over AI model misuse, especially by foreign entities. It raises critical questions about AI security, intellectual property, and the potential for authoritarian regimes to leverage advanced AI capabilities for surveillance and disinformation.

Key Takeaways

  • Anthropic claims DeepSeek and others misused Claude for AI training.
  • Illicit distillation could enable authoritarian governments to enhance surveillance and cyber operations.
  • The AI industry is urged to address the risks associated with model distillation.

DeepSeek allegedly targeted Claude’s reasoning capabilities, while generating ‘censorship-safe alternatives to politically sensitive questions.’

by Emma Roth | Feb 23, 2026, 8:22 PM UTC

Image: Cath Virginia / The Verge

Anthropic claims DeepSeek and two other Chinese AI companies misused its Claude AI model in an attempt to improve their own products. In an announcement on Monday, Anthropic says the “industrial-scale campaigns” involved the creation of around 24,000 fraudulent accounts and more than 16 million exchanges with Claude, as reported earlier by The Wall Street Journal.

The three companies — DeepSeek, MiniMax, and Moonshot — are accused of “distilling” Claude, or training a smaller AI model based on a more advanced one. Though Anthropic says that distillation is a “legitimate training method,” it adds that it can “also be used for illicit purposes,” including “to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently.”

Anthropic adds that illicitly distilled models are “...

