AI Safety Meets the War Machine | WIRED
Summary
Anthropic faces scrutiny from the Pentagon over its refusal to let its AI be used in certain deadly military operations, putting a $200 million contract at risk because of its safety-first stance.
Why It Matters
This situation highlights the tension between AI safety and military applications, raising critical questions about ethical AI use in warfare. As AI technology advances, the implications for safety and regulation become increasingly significant, affecting both companies and national security.
Key Takeaways
- Anthropic's refusal to participate in military operations could jeopardize a major Pentagon contract.
- The Pentagon's stance emphasizes the need for AI companies to align with military objectives.
- The debate over AI safety versus military use raises ethical concerns about the future of AI technology.
When Anthropic last year became the first major AI company cleared by the US government for classified use—including military applications—the news didn’t make a major splash. But this week a second development hit like a cannonball: The Pentagon is reconsidering its relationship with the company, including a $200 million contract, ostensibly because the safety-conscious AI firm objects to participating in certain deadly operations. The so-called Department of War might even designate Anthropic as a “supply chain risk,” a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, like China, which means the Pentagon would not do business with firms using Anthropic’s AI in their defense work.

In a statement to WIRED, chief Pentagon spokesperson Sean Parnell confirmed that Anthropic was in the hot seat. “Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people,” he said. This is a message to other companies as well: OpenAI, xAI, and Google, which currently have Department of Defense contracts for unclassified work, are jumping through the requisite hoops to get their own high clearances.

There’s plenty to unpack here. For one thing, there’s a question of whether Anthropic is being punished for complaining about the fact that its AI model Claude was used as part of the r...