Trump orders US government to stop using Anthropic products


Summary

President Trump has ordered US government agencies to stop using Anthropic's AI software, Claude, following a dispute over Pentagon demands regarding its technology.

Why It Matters

This decision highlights the tensions between government oversight and the AI industry, particularly regarding national security and ethical use of AI technologies. The outcome could set a precedent for future government contracts with AI companies and impact the development of AI safety protocols.

Key Takeaways

  • Trump's order halts the use of Anthropic's AI software in federal agencies.
  • The dispute centers on Pentagon demands for unrestricted use of AI technology.
  • Anthropic's refusal to comply could jeopardize its $200 million contracts with the government.
  • Sam Altman's support for Anthropic raises concerns about government relations with the AI industry.
  • The situation underscores broader implications for AI ethics and national security.

President told government to stop using Claude, Anthropic's AI software, after Dario Amodei, the company's co-founder, balked at Pentagon demands

[Image: President Trump claimed that Anthropic was trying to "strong-arm" the Department of War. EVELYN HOCKSTEIN/REUTERS]

Mark Sellman, Technology Correspondent | Louisa Clarence-Smith, US Business Editor
Friday February 27 2026, 10.40pm, The Times

President Trump has ordered US government agencies to cease all use of Anthropic's products after a row between the Pentagon and the artificial intelligence company over the use of its technology.

In a post on Truth Social, he said that the "leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War". There will be a six-month phase-out for agencies, such as the US Department of War, that use the company's products, he added.

"I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!" Trump wrote.

Anthropic's AI software, Claude, is the only advanced model approved for US classified systems. But as it renegotiates a contract with the US Department of War, Anthropic has balked at the Pentagon's demand that the government be able to make "any lawful use" of its AI.

[Image: India's prime minister, Narendra Modi, with Sam Altman and Dario Amodei — refusing to hold h...]

