Pentagon ‘close to cutting ties’ with AI firm Anthropic over restrictions
Summary
The Pentagon is considering severing ties with AI firm Anthropic over restrictions the company places on its Claude AI tool, which are intended to prevent its misuse for mass surveillance and weapons development.
Why It Matters
The dispute highlights the tension between national security interests and ethical AI development. As AI capabilities advance, preventing their misuse becomes increasingly critical, making this negotiation significant for both the military and the field of AI ethics.
Key Takeaways
- The Pentagon is frustrated with Anthropic's restrictions on AI use.
- Anthropic aims to prevent its AI from being used for mass surveillance or autonomous weaponry.
- The situation underscores the ethical dilemmas in AI development for military applications.
Anthropic wants to safeguard its Claude AI tool from being used for mass surveillance or advanced weapons development, Axios news reported

Bloomberg
Published: 4:46am, 17 Feb 2026

The Pentagon is close to cutting ties with Anthropic and may label the artificial intelligence company a supply chain risk after becoming frustrated with restrictions on how it can use the technology, the Axios news website reported.

Anthropic’s talks about extending a contract with the Pentagon are being held up over additional protections the company wants to put on its Claude tool, a person familiar with the matter said.

Anthropic wants to put safeguards in place to stop Claude from being used for mass surveillance of Americans or to develop weapons that can be deployed without a human involved, the person said, asking not to be identified because the negotiations are private.

The Pentagon wants to be able to use Claude as long as its deployment does not break the law. Axios reported on the disagreement earlier.

AI’s use in developing weapons and gathering personal data is a burgeoning risk for powerful models. Anthropic, which positions itself as a more responsible AI company that aims to avoid catastrophic harms from the technology, built Claude Gov specifically for the US n...