Pentagon ‘close to cutting ties’ with AI firm Anthropic over restrictions

AI Tools & Products · 2 min read

Summary

The Pentagon is considering severing ties with AI firm Anthropic due to disagreements over restrictions on the use of its Claude AI tool, which Anthropic aims to protect from misuse in surveillance and weapon development.

Why It Matters

This situation highlights the tension between national security interests and ethical AI development. As AI technologies advance, ensuring they are not misused for harmful purposes is crucial, making this negotiation significant for both the military and AI ethics.

Key Takeaways

  • The Pentagon is frustrated with Anthropic's restrictions on AI use.
  • Anthropic aims to prevent its AI from being used for mass surveillance or autonomous weaponry.
  • The situation underscores the ethical dilemmas in AI development for military applications.

Anthropic wants to safeguard its Claude AI tool from being used for mass surveillance or advanced weapons development, Axios news reported

Bloomberg · Published: 4:46am, 17 Feb 2026

The Pentagon is close to cutting ties with Anthropic and may label the artificial intelligence company a supply chain risk after becoming frustrated with restrictions on how it can use the technology, the Axios news website reported.

Anthropic’s talks about extending a contract with the Pentagon are being held up over additional protections the artificial intelligence company wants to put on its Claude tool, a person familiar with the matter said.

Anthropic wants to put safeguards in place to stop Claude from being used for mass surveillance of Americans or to develop weapons that can be deployed without a human involved, the person said, asking not to be identified because the negotiations are private.

The Pentagon wants to be able to use Claude as long as its deployment does not break the law. Axios reported on the disagreement earlier.

AI’s use cases for developing weapons and gathering personal data are a burgeoning risk for powerful models. Anthropic, which positions itself as a more responsible AI company that aims to avoid catastrophic harms from the technology, built Claude Gov specifically for the US n...

Related Articles

AI Safety

NHS staff resist using Palantir software, reportedly citing ethics concerns, privacy worries, and doubt that the platform adds much


Reddit - Artificial Intelligence · 1 min
Machine Learning

AI assistants are optimized to seem helpful. That is not the same thing as being helpful.

RLHF trains models on human feedback. Humans rate responses they like. And it turns out humans consistently rate confident, fluent, agree...

Reddit - Artificial Intelligence · 1 min
Computer Vision

House Democrat Questions Anthropic on AI Safety After Source Code Leak

Rep. Josh Gottheimer, who is generally tough on China, just sent a letter to Anthropic questioning their decision to reduce certain safet...

Reddit - Artificial Intelligence · 1 min
LLMs

[2512.21106] Semantic Refinement with LLMs for Graph Representations

Abstract page for arXiv paper 2512.21106: Semantic Refinement with LLMs for Graph Representations

arXiv - Machine Learning · 4 min