Pentagon formally designates Anthropic a supply chain risk amid feud over AI guardrails

AI Tools & Products 5 min read

The U.S. military has formally designated artificial intelligence firm Anthropic a supply chain risk, the company announced Thursday, a sweeping move that could cut it off from military-related contracts. The Trump administration and Anthropic — the only AI company deployed on the Pentagon's classified networks — are at an impasse over Anthropic's push for guardrails that would explicitly ban the U.S. military from using its Claude model to conduct mass surveillance on Americans or power fully autonomous weapons. The Pentagon says it needs the ability to use Claude for "all lawful purposes," and argues the uses of AI that Anthropic is concerned about are already not allowed.

Defense Secretary Pete Hegseth announced last week that Anthropic would be cut off from its government contracts and designated a supply chain risk, but Anthropic had not received formal notification of that step until this week. A senior Pentagon official confirmed to CBS News that the company has now been notified.

Hegseth said the military will phase out Anthropic over six months. A source familiar with the situation told CBS News that no timeline for offboarding Claude was provided in the designation.

The U.S. military has used Claude in its strikes on Iran that began last weekend, two sources familiar with the matter previously told CBS News. It's not clear exactly how the artificial intelligence model is being deployed.

Anthropic CEO Dario Amodei said in a statement that "we do not believe this acti...

Originally published on March 06, 2026. Curated by AI News.

Related Articles

Machine Learning

[R] I trained a 3k parameter model on XOR sequences of length 20. It extrapolates perfectly to length 1,000,000. Here's why I think that's architecturally significant.

I've been working on an alternative to attention-based sequence modeling that I'm calling Geometric Flow Networks (GFN). The core idea: i...

Reddit - Machine Learning · 1 min ·
Machine Learning

[D] Data curation and targeted replacement as a pre-training alignment and controllability method

Hi, r/MachineLearning: has much research been done in large-scale training scenarios where undesirable data has been replaced before trai...

Reddit - Machine Learning · 1 min ·
AI Safety

I’ve come up with a new thought experiment to approach ASI, and it challenges the very notions of alignment and containment

I’ve written an essay exploring what I’m calling the Super-Intelligent Octopus Problem—a thought experiment designed to surface a paradox...

Reddit - Artificial Intelligence · 1 min ·
AI Safety

Bias in AI: Examples and 6 Ways to Fix it in 2026

AI bias is an anomaly in the output of ML algorithms due to prejudiced assumptions. Explore types of AI bias, examples, how to reduce bia...

AI Events · 36 min ·

