Anthropic’s Break With the Pentagon Ignites AI Ethics Debate
ANALYSIS: After rejecting Pentagon demands tied to autonomous weapons and surveillance, Anthropic’s stand has intensified debate over AI ethics — echoing recent Vatican warnings.

Dario Amodei (l), co-founder and CEO of Anthropic, and Secretary of War Pete Hegseth (photo: TechCrunch / Molly Riley / Register Composite / Wikimedia CC BY 2.0 / Public Domain)

Jonah McKeown | News | March 3, 2026

Amid the explosion of artificial intelligence (AI) in recent years, Catholics have consistently called for the inclusion of socially responsible safeguards, limits, and ethical principles within the technology.

Now, a leading AI developer that is trying to do just that has found itself in a major dispute with the U.S. government — stoking a heated debate over the ethical and moral dimensions of AI development.

Anthropic, a San Francisco startup, is the creator of Claude, an AI assistant based on a large language model (LLM) that has already enjoyed wide adoption across many sectors of U.S. society, including thousands of businesses and schools.

Founded in part by defectors from industry juggernaut OpenAI, Anthropic has positioned itself as the safe, responsible option in the AI ecosystem; its CEO, Dario Amodei, often gives interviews advocating for the development of “guardrails” to protect humanity from unchecked AI.

The U.S. government, meanwhile, has since last year been exploring the use of AI in national defense. Four major U.S. AI compa...