Pentagon CTO urges Anthropic to ‘cross the Rubicon’ on military AI use cases amid ethics dispute

Summary

The Pentagon's CTO, Emil Michael, emphasizes the need for tailored AI regulations in military applications amid a dispute with Anthropic over the use of its AI technology.

Why It Matters

This article highlights the critical intersection of military operations and AI technology, addressing ethical concerns and regulatory frameworks. As the military seeks to adopt advanced AI, the discussion around its governance is vital for ensuring compliance with laws and maintaining public trust.

Key Takeaways

  • The Pentagon insists on specific regulations for AI used in military contexts.
  • Emil Michael advocates for resolving ethical disputes with AI suppliers like Anthropic.
  • The military's AI use cases are categorized into corporate, intelligence, and warfighting applications.
  • Challenges in AI adoption stem from bureaucratic hurdles and data ownership issues.
  • The advancement of AI technology is seen as transformative for military operations.

The Pentagon will adhere to existing laws and regulations associated with surveillance, security and democratic processes as it fast-tracks the military’s frontier AI adoption, but it won’t permit companies supplying the technology to determine its rules for operation, Undersecretary of Defense for Research and Engineering Emil Michael told DefenseScoop. His comments come as the Defense Department is locked in a high-stakes dispute with Anthropic about the U.S. military’s use of the startup’s Claude AI model in real-world operations.

“We want guardrails. We need the guardrails tuned for military applications. You can’t have an AI company sell AI to the Department of War and [then] don’t let it do Department of War things, because we’re in the business of defending the country and defending our troops,” Michael said. “I think if someone wants to make money from the government, from the U.S. Department of War, those guardrails ought to be tuned for our use cases — so long as they’re lawful.”

(Officially changing the department’s name requires an act of Congress, but President Donald Trump last year signed an executive order rebranding DOD as the Department of War.)

During a meeting with a small group of reporters on the sidelines of the annual Microelectronics Commons summit Thursday, Michael provided updates on the department’s GenAI.mil rollout and pushed for the ethics-related rift between the Pentagon and Anthropic to be resolved. “I believe and hope that...


