Anthropic accuses DeepSeek and other Chinese firms of using Claude to train their AI | The Verge
Summary
Anthropic accuses DeepSeek and other Chinese firms of misusing its Claude AI model to enhance their own products through illicit distillation methods.
Why It Matters
This issue highlights the growing concerns over AI model misuse, especially by foreign entities. It raises critical questions about AI security, intellectual property, and the potential for authoritarian regimes to leverage advanced AI capabilities for surveillance and disinformation.
Key Takeaways
- Anthropic claims DeepSeek and others misused Claude for AI training.
- Illicit distillation could enable authoritarian governments to enhance surveillance and cyber operations.
- The AI industry is urged to address the risks associated with model distillation.
DeepSeek allegedly targeted Claude’s reasoning capabilities, while generating ‘censorship-safe alternatives to politically sensitive questions.’

by Emma Roth | Feb 23, 2026, 8:22 PM UTC

Anthropic claims DeepSeek and two other Chinese AI companies misused its Claude AI model in an attempt to improve their own products. In an announcement on Monday, Anthropic says the “industrial-scale campaigns” involved the creation of around 24,000 fraudulent accounts and more than 16 million exchanges with Claude, as reported earlier by The Wall Street Journal.

The three companies — DeepSeek, MiniMax, and Moonshot — are accused of “distilling” Claude, or training a smaller AI model based on a more advanced one. Though Anthropic says that distillation is a “legitimate training method,” it adds that it can “also be used for illicit purposes,” including “to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently.”

Anthropic adds that illicitly distilled models are “...