New privacy tool helps detect when AI agents become double agents

AI Tools & Products 5 min read

April 7, 2026 · by Scott Bureau

RIT cybersecurity research finds many AI tools lack safeguards for sensitive data

[Photo: Scott Hamilton/RIT. Assistant Professor Yidan Hu, left, and Ph.D. student Ye Zheng work in RIT's Global Cybersecurity Institute.]

RIT cybersecurity researchers have developed a tool that helps users see when autonomous AI systems collect, process, or share sensitive data, and whether those actions align with privacy policies.

Artificial intelligence (AI) agents are powerful tools that can make work and life easier. They can also introduce new privacy risks when given access to sensitive personal data such as Social Security numbers. Privacy experts at Rochester Institute of Technology are studying what happens to personal data once an AI agent begins performing tasks, with the goal of making AI agents more accountable.

Yidan Hu, assistant professor of cybersecurity, and Ye Zheng, a computing and information sciences Ph.D. student, developed AudAgent, a tool that continuously monitors the data practices of AI agents. The tool then determines whether an agent is complying with its stated privacy policy and identifies ways to improve privacy controls.

The research comes as agentic AI is gaining traction and can be used in conjunction with generative AI systems like ChatGPT. Unlike a traditional chatbot, an AI agent can take actions on a user's behalf.
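The article does not describe AudAgent's internals, but the general idea of auditing an agent's observed actions against a stated privacy policy can be illustrated with a minimal sketch. Everything below (the policy format, the `AgentAction` record, and the `audit` function) is hypothetical and is not AudAgent's actual design:

```python
from dataclasses import dataclass

# Hypothetical policy: the data categories each action type is declared to use.
POLICY = {
    "summarize_email": {"email_body"},
    "book_flight": {"name", "payment_card"},
}

@dataclass
class AgentAction:
    """One observed step taken by an agent during a task."""
    action: str
    data_used: set[str]

def audit(actions: list[AgentAction]) -> list[str]:
    """Return one violation message for each action that touched
    data categories its policy entry does not permit."""
    violations = []
    for a in actions:
        allowed = POLICY.get(a.action, set())
        extra = a.data_used - allowed
        if extra:
            violations.append(f"{a.action} used undeclared data: {sorted(extra)}")
    return violations

# An example trace: the second action reads a Social Security number
# that the policy never declared for flight booking.
trace = [
    AgentAction("summarize_email", {"email_body"}),
    AgentAction("book_flight", {"name", "payment_card", "ssn"}),
]
for violation in audit(trace):
    print(violation)
```

A real monitor would also have to intercept the agent's tool calls at runtime and parse natural-language privacy policies into machine-checkable rules, both of which this sketch leaves out.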

Originally published on April 08, 2026. Curated by AI News.

Related Articles

Boston's CIO wants the public — and other city governments — to use his open-source agentic AI tools
AI Tools & Products · 7 min

[D] Your Agent, Their Asset: Real-world safety evaluation of OpenClaw agents (CIK poisoning raises attack success to ~64–74%)
Paper: https://arxiv.org/abs/2604.04759 This paper presents a real-world safety evaluation of OpenClaw, a personal AI agent with access t...
Reddit - Machine Learning · 1 min

Microsoft's GitHub Sees Booming Traffic—and Outages—as AI Agents Flood Platform
AI Tools & Products

We have an AI agent fragmentation problem
Every AI agent works fine on its own — but the moment you try to use more than one, everything falls apart. Different runtimes. Different...
Reddit - Artificial Intelligence · 1 min

