Your AI agents are moving sensitive data. Do you know where?
Bonfy.AI CEO Gidi Cohen on why AI agent security starts at the data layer, and what CISOs must do before agents expose sensitive data.
Mirko Zorz, Director of Content, Help Net Security
Sponsored
March 23, 2026

In this Help Net Security interview, Gidi Cohen, CEO at Bonfy.AI, addresses what he sees as the most pressing gap in AI agent security: data-layer risk. While the industry focuses on prompt injection and model behavior, Cohen argues the deeper threat is autonomous AI agents operating across systems with no visibility into what data they access, combine, or expose.

He explains how Bonfy.AI approaches this through three areas: controlling what data agents can access for grounding, monitoring content as it moves through tool calls and MCP servers, and letting agents query Bonfy in real time to check whether an action is safe before they take it. The conversation covers threat modeling, anomaly detection, multi-agent delegation, model versioning, and practical advice for CISOs navigating pressure to deploy AI at scale.

When we talk about “AI agent security,” most people immediately think about prompt injection or jailbreaks. What’s the threat vector that keeps you up at night that almost nobody in the industry is preparing for?

The threat that keeps us up at night isn’t another clever jailbreak; it’s autonomous data misuse by AI agents operating across systems the enterprise doesn’t fully see, understand, or govern yet. Most of the conversation today is still “LLM-centric”: prompt injection, jailbreaks, model behavior. But in large orga...
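The pre-action check Cohen describes — an agent asking whether a move is safe before it makes it — can be sketched generically. The sketch below is illustrative only: all names, labels, and rules are hypothetical assumptions, not Bonfy.AI's actual API or policy model.

```python
from dataclasses import dataclass

@dataclass
class ActionRequest:
    """A proposed agent action, described before execution (hypothetical schema)."""
    agent_id: str
    tool: str              # e.g. "email.send", "crm.export"
    data_labels: set       # sensitivity labels attached to the outbound content
    destination: str       # where the content would land

# Hypothetical policy: some labels may never leave, others only internally.
BLOCKED = {"PII", "PCI"}
INTERNAL_ONLY = {"confidential"}

def check_action(req: ActionRequest) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed action; the agent aborts on deny."""
    if req.data_labels & BLOCKED:
        return False, "blocked sensitivity labels present"
    if req.data_labels & INTERNAL_ONLY and not req.destination.endswith(".internal"):
        return False, "internal-only content bound for an external destination"
    return True, "ok"

# An agent would call this before every tool invocation:
ok, why = check_action(ActionRequest(
    "agent-7", "email.send", {"confidential"}, "partner.example.com"))
```

The point of the pattern is placement, not the toy rules: the decision happens at the data layer, on the content the agent is about to move, rather than on the prompt that triggered the move.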