Pentagon threatens to cut off Anthropic in AI safeguards dispute: Report


Summary

The Pentagon is threatening to sever ties with AI company Anthropic due to its refusal to allow unrestricted military use of its AI models, amid ongoing negotiations with several tech firms.

Why It Matters

This situation highlights the tension between AI companies and military interests regarding ethical AI usage. The outcome could influence future AI development policies and military applications, impacting the broader AI landscape and safety regulations.

Key Takeaways

  • The Pentagon demands unrestricted use of AI tools for military purposes.
  • Anthropic's refusal to comply could lead to the termination of the partnership.
  • Other major AI firms like OpenAI and Google are also involved in these negotiations.
  • Discussions include ethical considerations around autonomous weapons and surveillance.
  • The outcome may set precedents for AI regulations in military contexts.

Dario Amodei, co-founder and chief executive officer of Anthropic, during a Bloomberg Television interview in San Francisco, California, US, on Tuesday, Dec. 9, 2025. David Paul Morris | Bloomberg | Getty Images

The Pentagon is considering ending its relationship with artificial intelligence company Anthropic over its insistence on keeping some restrictions on how the U.S. military uses its models, Axios reported on Saturday, citing an administration official.

The Pentagon is pushing four AI companies to let the military use their tools for "all lawful purposes," including in areas of weapons development, intelligence collection and battlefield operations, but Anthropic has not agreed to those terms and the Pentagon is getting fed up after months of negotiations, according to the Axios report. The other companies included OpenAI, Google and xAI.

An Anthropic spokesperson said the company had not discussed the use of its AI model Claude for specific operations with the Pentagon. The spokesperson said conversations with the U.S. government so far had focused on a specific set of usage policy questions, including hard limits around fully autonomous weapons and mass domestic surveillance, none of which related to current operations.

The Pentagon did not immediately respond to Reuters' request for comment.

Anthropic's AI model Claude was used in the U.S. military's operation to capture former Venezuelan President Nicolas Maduro, with Claude deployed via Anthropic's partnersh...

