Judge blocks Pentagon from labeling Anthropic AI a "supply chain risk" and halts Trump's ban on federal use
A judge has blocked the Trump administration from labeling Anthropic a "supply chain risk" and cutting off all federal work with the artificial intelligence firm, an early win for Anthropic in its bitter feud with the government over AI guardrails.

U.S. District Judge Rita Lin on Thursday ruled in favor of Anthropic, which sued the federal government earlier this month for taking actions that it called an "unprecedented and unlawful" attempt to punish the company for First Amendment-protected speech.

Lin's ruling in the case prevents the government from enforcing its supply chain risk designation against Anthropic, a move that aimed to stop private government contractors from using the company's powerful Claude AI model. It also halts an order by President Trump for every federal agency to "IMMEDIATELY CEASE all use of Anthropic's technology."

In the ruling, she called the administration's moves "Orwellian" and said they could "cripple" the company. "At bottom, Anthropic has shown that these broad punitive measures were likely unlawful and that it is suffering irreparable harm from them," she wrote.

The dispute revolves around Anthropic's push to bar the military from using Claude for domestic surveillance or to power fully autonomous weapons. The Defense Department has said it needs to maintain the authority to use AI for "all lawful purposes," and that there are already restrictions in place against those particular uses. The judge wrote that her ruling does not stop the ...