Anthropic is clashing with the Pentagon over AI use. Here's what each side wants

AI Tools & Products · 4 min read

Summary

Anthropic is in negotiations with the Pentagon over the use of its AI models, seeking assurances that they will not be used in autonomous weapons or mass surveillance, while the DOD wants broader usage rights.

Why It Matters

This conflict highlights the tension between AI innovation and ethical considerations in defense applications. As AI becomes integral to national security, the outcomes of these negotiations could set precedents for future AI regulations and military contracts, impacting the broader AI landscape.

Key Takeaways

  • Anthropic is negotiating terms with the Pentagon regarding the use of its AI models.
  • The company seeks to prevent its technology from being used in autonomous weapons or mass surveillance.
  • The DOD insists on using AI models for all lawful purposes, creating a potential impasse.
  • If negotiations fail, Anthropic could face significant repercussions, including being labeled a supply chain risk.
  • The outcome may influence future AI regulations and relationships between tech companies and government agencies.

Key Points

  • Anthropic's relationship with the Department of Defense is "under review" as the two sides negotiate over how the company's AI models can be used.
  • The startup wants assurance that its models will not be used for autonomous weapons or mass surveillance, according to a report from Axios.
  • The DOD wants to use Anthropic's models "for all lawful use cases" without limitation, according to Emil Michael, the under secretary of war for research and engineering.

[File photo: The Pentagon seen from the air in Washington, U.S., March 3, 2022. Joshua Roberts | Reuters]

Anthropic is at odds with the Department of Defense over how its artificial intelligence models should be used, and its work with the agency is "under review," a Pentagon spokesperson told CNBC.

The 5-year-old startup was awarded a $200 million contract with the DOD last year. As of February, Anthropic is the only AI company that has deployed its models on the agency's classified networks and provided customized models to national security customers. But negotiations over "going forward" terms of use have hit a snag, Michael said at a summit in Florida on Tuesday.

Anthropic wants assurance that its models will not be used for autonomous weapons or to "spy on Americans en masse," according to the Axios report. The DOD, by contrast, wants to use the models "for all lawful use cases" without limitation. "If any one company doesn't want t...

