New privacy tool helps detect when AI agents become double agents
April 7, 2026, by Scott Bureau

RIT cybersecurity research finds many AI tools lack safeguards for sensitive data

Photo (Scott Hamilton/RIT): Assistant Professor Yidan Hu, left, and Ph.D. student Ye Zheng work in RIT's Global Cybersecurity Institute. RIT cybersecurity researchers have developed a tool that helps users see when autonomous AI systems collect, process, or share sensitive data, and whether those actions align with privacy policies.

Artificial intelligence (AI) agents are powerful tools that can make work and life easier. They can also introduce new privacy risks when given access to sensitive personal data, such as Social Security numbers. Privacy experts at Rochester Institute of Technology are studying what happens to personal data once an AI agent starts performing tasks; ultimately, the researchers aim to make AI agents more accountable.

Yidan Hu, assistant professor of cybersecurity, and Ye Zheng, a computing and information sciences Ph.D. student, developed AudAgent, a tool that continuously monitors the data practices of AI agents. The tool then determines whether an agent is complying with its stated privacy policies and looks for ways to improve privacy controls.

The research comes as agentic AI gains traction and is increasingly used alongside generative AI systems like ChatGPT. Unlike a traditional chatbot, an AI agent can take actions on a user's behalf.
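The article does not describe AudAgent's internals, but the core idea it states, auditing each action an agent takes against the data categories its privacy policy permits, can be illustrated with a minimal sketch. Everything below (the `POLICY` table, the `DETECTORS` patterns, and the `audit` function) is hypothetical and for illustration only, not the researchers' actual implementation.

```python
import re

# Hypothetical policy: which sensitive-data categories each agent
# action is permitted to touch. Illustrative only.
POLICY = {
    "send_email": {"name", "email"},
    "book_flight": {"name", "passport_number"},
}

# Toy detectors for sensitive-data categories (real systems would use
# far more robust classifiers than regular expressions).
DETECTORS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def audit(action: str, payload: str) -> list[str]:
    """Return policy violations found in one agent action's payload."""
    allowed = POLICY.get(action, set())
    violations = []
    for category, pattern in DETECTORS.items():
        if pattern.search(payload) and category not in allowed:
            violations.append(f"{action} leaked disallowed category: {category}")
    return violations

# An outgoing email containing a Social Security number violates
# the policy above, because "ssn" is not an allowed category.
print(audit("send_email", "Hi, my SSN is 123-45-6789"))
```

A monitor built on this pattern would sit between the agent and its tools, running every outgoing action through `audit` before it executes, which is the general shape of continuous compliance checking the article describes.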