Trump orders U.S. government to stop using Anthropic but gives Pentagon 6 months to phase it out

Summary

Trump orders a six-month phase-out of Anthropic's technology from U.S. government use, citing national security concerns and the company's refusal to comply with military demands.

Why It Matters

This decision highlights the ongoing tensions between AI companies and the U.S. military regarding the ethical use of AI technologies. It raises questions about national security, corporate responsibility, and the future of AI in defense applications.

Key Takeaways

  • Trump's directive aims to sever ties with Anthropic over its refusal to allow military use of its AI technology.
  • The Pentagon is designating Anthropic as a supply-chain risk, impacting its ability to work with defense contractors.
  • The dispute reflects broader concerns in Silicon Valley about the ethical implications of AI in military applications.

President Donald Trump said Friday he will shut out Anthropic from the federal government after the AI company refused to compromise on how its technology could be used by the U.S. military. But he is also giving the Pentagon a six-month period to phase out Anthropic's technology, as it is one of the few AI companies allowed to operate in classified settings.

In a Truth Social post, Trump called Anthropic "woke" and "leftwing," claiming it is endangering troops and jeopardizing national security by not acceding to the Defense Department's demands. "Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology," he wrote. "We don't need it, we don't want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic's products, at various levels." Trump added that if Anthropic doesn't obey, he will use "the Full Power of the Presidency to make them comply."

The San Francisco startup had refused to let users deploy its Claude models for mass domestic surveillance or autonomous weapons, while the Defense Department demanded the right to use the technology in all lawful cases. Defense Secretary Pete Hegseth threatened to revoke Anthropic's $200 million contract with the U.S. military or label the company a supply-chain risk. On Friday, he said on X that he is designating the company as "Supply-Chain R...
