Top AI firm alleges Chinese labs used 24K fake accounts to siphon US tech

Summary

Anthropic alleges that Chinese labs DeepSeek, Moonshot AI, and MiniMax used 24,000 fake accounts to extract capabilities from its Claude chatbot, raising concerns over AI security and U.S. export controls.

Why It Matters

This allegation highlights the vulnerabilities in U.S. AI technology and the potential for foreign entities to exploit these weaknesses. It raises critical questions about the effectiveness of current export controls and the implications for national security, particularly regarding the misuse of AI in military and surveillance applications.

Key Takeaways

  • Anthropic claims the Chinese labs used roughly 24,000 fake accounts for unauthorized AI distillation.
  • Distillation could let foreign labs replicate U.S. model capabilities while bypassing the safety guardrails built into them.
  • Current U.S. export controls may not adequately address the risks posed by AI distillation techniques.

FIRST ON FOX: As Washington tightens export controls to preserve America's artificial intelligence edge, top AI firm Anthropic alleges three China-based AI laboratories found another way to access advanced U.S. capabilities.

The U.S. firm alleges DeepSeek, Moonshot AI and MiniMax used roughly 24,000 fraudulent accounts to generate more than 16 million exchanges with Anthropic's Claude chatbot in a coordinated "distillation" campaign designed to extract high-value model outputs, according to a report first obtained by Fox News Digital.

The threat goes beyond ripping off U.S. companies, according to the report. Anthropic argues that models built through large-scale distillation are unlikely to retain the safety guardrails embedded in frontier U.S. systems. "Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems—enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance," Anthropic said.

The U.S. military reportedly used Anthropic's AI tool Claude during the operation that captured Venezuelan leader Nicolás Maduro. (Kurt "CyberGuy" Knutsson)

Anthropic says it identified the campaigns u...
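To make the mechanics concrete: "distillation" in this sense means querying a capable teacher model at scale, harvesting its prompt-and-completion pairs, and training a smaller student model to imitate those outputs. The toy sketch below illustrates only the general shape of that pipeline; the function names are hypothetical, the teacher is a stub standing in for a hosted chatbot API, and a real student would be fine-tuned with a next-token loss rather than memorizing pairs.

```python
# Toy sketch of the distillation pipeline described in the article.
# All names are hypothetical; this is not any lab's actual method.

def teacher(prompt: str) -> str:
    # Stand-in for querying a frontier model's chat API at scale.
    return f"Answer to: {prompt}"

def collect_distillation_data(prompts: list[str]) -> list[tuple[str, str]]:
    # Phase 1: harvest (prompt, completion) pairs from the teacher.
    # The alleged campaign generated millions of such exchanges.
    return [(p, teacher(p)) for p in prompts]

def train_student(pairs: list[tuple[str, str]]) -> dict[str, str]:
    # Phase 2: in practice, fine-tune a smaller model on the pairs with a
    # cross-entropy imitation loss; here the "student" simply memorizes.
    return dict(pairs)

prompts = ["What is 2+2?", "Summarize RLHF."]
dataset = collect_distillation_data(prompts)
student = train_student(dataset)
print(student["What is 2+2?"])  # student now echoes the teacher's completion
```

The key point the sketch captures is that the student inherits the teacher's behavior from its outputs alone, without inheriting whatever safety training shaped the teacher internally, which is the risk Anthropic highlights.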

