Top AI firm alleges Chinese labs used 24K fake accounts to siphon US tech
Summary
Anthropic alleges that Chinese labs DeepSeek, Moonshot AI, and MiniMax used 24,000 fake accounts to extract capabilities from its Claude chatbot, raising concerns over AI security and U.S. export controls.
Why It Matters
This allegation highlights the vulnerabilities in U.S. AI technology and the potential for foreign entities to exploit these weaknesses. It raises critical questions about the effectiveness of current export controls and the implications for national security, particularly regarding the misuse of AI in military and surveillance applications.
Key Takeaways
- Anthropic claims Chinese labs used 24,000 fake accounts for unauthorized AI distillation.
- The distillation process could enable foreign labs to bypass U.S. safety measures in AI.
- Current U.S. export controls may not adequately address the risks posed by AI distillation techniques.
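"Distillation" here means training a new model purely on another model's outputs, so the copy never needs access to the original's weights or training data. The following is a minimal toy sketch of that idea, using a simple linear function as a stand-in "teacher"; all numbers and functions are invented for illustration and do not reflect Anthropic's systems or any real lab's pipeline.

```python
# Toy illustration of model "distillation": a student is trained only on
# a teacher's outputs, never on the teacher's weights or training data.
# Everything here is a made-up stand-in for illustration.

import random

random.seed(0)

# "Teacher": a fixed black-box model (here, a simple linear function).
# In the alleged campaign, the teacher would be a frontier chatbot
# queried at scale through API accounts.
def teacher(x):
    return 3.0 * x + 1.0

# Step 1: harvest teacher outputs by querying it many times.
queries = [random.uniform(-5, 5) for _ in range(1000)]
dataset = [(x, teacher(x)) for x in queries]  # (query, teacher answer) pairs

# Step 2: train a student on those (query, answer) pairs alone,
# using plain stochastic gradient descent on squared error.
w, b = 0.0, 0.0
lr = 0.01
for _ in range(200):
    for x, y in dataset:
        err = (w * x + b) - y
        w -= lr * err * x   # gradient of squared error w.r.t. w
        b -= lr * err       # gradient of squared error w.r.t. b

# The student now closely mimics the teacher without ever having
# seen how the teacher was built.
print(round(w, 2), round(b, 2))
```

The same logic scales up: with enough query/answer pairs, a student network can absorb much of a teacher's behavior, which is why the report frames mass account creation as a capability-extraction vector rather than ordinary scraping.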
FIRST ON FOX: As Washington tightens export controls to preserve America's artificial intelligence edge, top AI firm Anthropic alleges three China-based AI laboratories found another way to access advanced U.S. capabilities.

The U.S. firm alleges DeepSeek, Moonshot AI and MiniMax used roughly 24,000 fraudulent accounts to generate more than 16 million exchanges with Anthropic's Claude chatbot in a coordinated "distillation" campaign designed to extract high-value model outputs, according to a report first obtained by Fox News Digital.

The threat goes beyond ripping off U.S. companies, according to the report. Anthropic argues that models built through large-scale distillation are unlikely to retain the safety guardrails embedded in frontier U.S. systems.

"Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems—enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance," Anthropic said.

The U.S. military reportedly used Anthropic's AI tool Claude during the operation that captured Venezuelan leader Nicolás Maduro. (Kurt "CyberGuy" Knutsson)

Anthropic says it identified the campaigns u...