Pentagon’s ‘Attempt to Cripple’ Anthropic Is Troubling, Judge Says | WIRED
During a hearing Tuesday, a district court judge questioned the Department of Defense’s motivations for labeling the Claude AI developer a supply-chain risk.
The US Department of Defense appears to be illegally punishing Anthropic for trying to restrict the use of its AI tools by the military, US district judge Rita Lin said during a court hearing on Tuesday.

“It looks like an attempt to cripple Anthropic,” Lin said of the Pentagon designating the company a supply-chain risk. “It looks like [the department] is punishing Anthropic for trying to bring public scrutiny to this contract dispute, which of course would be a violation of the First Amendment.”

Anthropic has filed two federal lawsuits alleging that the Trump administration’s decision to designate the company a security risk amounted to illegal retaliation. The government slapped the label on Anthropic after it pushed for limitations on how its AI could be used by the military. Tuesday’s hearing came in a case filed in San Francisco.

Anthropic is seeking a temporary order to pause the designation. The relief, Anthropic hopes, would help convince some of the company’s skittish customers to stick with it just a bit longer. Lin can issue a pause only if she determines that Anthropic is likely to win the overall case. Her ruling on the injunction is expected in the next few days.

The dispute has sparked a broader public conversation about how artificial intelligence is increasingly being used by the armed forces, and whether Silicon Valley companies should give deference to the government in determining how the technology they dev...