[2602.11510] AgentLeak: A Full-Stack Benchmark for Privacy Leakage in Multi-Agent LLM Systems
Computer Science > Artificial Intelligence

arXiv:2602.11510 (cs)
[Submitted on 12 Feb 2026 (v1), last revised 27 Mar 2026 (this version, v2)]

Title: AgentLeak: A Full-Stack Benchmark for Privacy Leakage in Multi-Agent LLM Systems
Authors: Faouzi El Yagoubi, Godwin Badu-Marfo, Ranwa Al Mallah

Abstract: Multi-agent Large Language Model (LLM) systems create privacy risks that current benchmarks cannot measure. When agents coordinate on tasks, sensitive data passes through inter-agent messages, shared memory, and tool arguments, pathways that output-only audits never inspect. We introduce AgentLeak, to the best of our knowledge the first full-stack benchmark for privacy leakage that covers these internal channels. It spans 1,000 scenarios across the healthcare, finance, legal, and corporate domains, paired with a 32-class attack taxonomy and a three-tier detection pipeline. A factorial evaluation crossing five production LLMs (GPT-4o, GPT-4o-mini, Claude 3.5 Sonnet, Mistral Large, and Llama 3.3 70B) with all 1,000 scenarios, yielding 4,979 validated execution traces, reveals that multi-agent configurations reduce per-channel output leakage (C1: 27.2% vs 43.2% in single-agent) but introduce unmonitored internal channels that raise total system exposure to 68.9% (aggregated across C1, C2, and C5). Internal chann...
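The abstract's headline numbers contrast a single-channel rate (C1 alone) with an aggregated exposure taken across several channels, which is naturally read as a union over channels per trace. Below is a minimal Python sketch of that aggregation under stated assumptions: the trace schema, field names, and the mapping of C2 and C5 to inter-agent messages and tool arguments are hypothetical illustrations inferred from the abstract, not the paper's actual data format or detection pipeline.

```python
# Illustrative sketch only. The Trace schema and the channel-to-field
# mapping (C2 = inter-agent messages, C5 = tool arguments) are assumptions
# based on the abstract, not the paper's definitions.
from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class Trace:
    """One validated execution trace with per-channel leak flags."""
    c1_output: bool       # C1: leakage in the final user-visible output
    c2_interagent: bool   # C2: leakage in inter-agent messages (assumed)
    c5_tool_args: bool    # C5: leakage in tool-call arguments (assumed)


def per_channel_rate(traces: List[Trace], field: str) -> float:
    """Fraction of traces that leak on one specific channel."""
    return sum(getattr(t, field) for t in traces) / len(traces)


def total_exposure(traces: List[Trace]) -> float:
    """Fraction of traces that leak on ANY monitored channel.

    Because this is a union over channels, it is lower-bounded by
    every per-channel rate: lowering C1 alone cannot bound it.
    """
    leaked: Iterable[bool] = (
        t.c1_output or t.c2_interagent or t.c5_tool_args for t in traces
    )
    return sum(leaked) / len(traces)
```

The structural point this illustrates matches the abstract's finding: a multi-agent configuration can lower the C1 (output) rate while the union across C1, C2, and C5 still rises, because audits that watch only the output channel never see the internal leaks that dominate the union.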