Washington needs AI guardrails — now | Opinion
We need legislation that draws clear lines — on surveillance, on lethal autonomy, on what AI systems may and may not do on behalf of the United States government

David Rabjohns
March 27, 2026, 6:00 a.m. ET

About this article
- The Pentagon reportedly threatened to blacklist AI company Anthropic for refusing to allow its technology to be used for domestic surveillance and autonomous weapons.
- OpenAI subsequently signed a deal with the Pentagon that complies with existing laws, which some critics argue have loopholes.
- The author argues that Congress, not private companies, should create clear legislation to regulate government use of AI.
- The author warns that current laws are outdated and may not prevent misuse of powerful AI technologies.

The Pentagon threatened to blacklist Anthropic — the AI safety company behind Claude — for refusing to let the government use its technology for mass domestic surveillance and fully autonomous weapons. A few hours later, OpenAI stepped in and signed a deal. Washington declared victory.

I am not a lawyer. I am not a general. I am a retired technology entrepreneur who spent 20 years building software systems for Fortune 1000 companies, and I know what it looks like when someone signs a contract with enough wiggle room to drive a Humvee through.

OpenAI's deal promises to comply with existing laws. Sounds reasonable. Except that in 2013, those same laws were on the books when the NSA was quietly collecting the phone records of ...