[2602.11510] AgentLeak: A Full-Stack Benchmark for Privacy Leakage in Multi-Agent LLM Systems



Computer Science > Artificial Intelligence
arXiv:2602.11510 (cs)
[Submitted on 12 Feb 2026 (v1), last revised 27 Mar 2026 (this version, v2)]

Title: AgentLeak: A Full-Stack Benchmark for Privacy Leakage in Multi-Agent LLM Systems
Authors: Faouzi El Yagoubi, Godwin Badu-Marfo, Ranwa Al Mallah

Abstract: Multi-agent Large Language Model (LLM) systems create privacy risks that current benchmarks cannot measure. When agents coordinate on tasks, sensitive data passes through inter-agent messages, shared memory, and tool arguments, all pathways that output-only audits never inspect. We introduce AgentLeak, to the best of our knowledge the first full-stack benchmark for privacy leakage covering internal channels. It spans 1,000 scenarios across healthcare, finance, legal, and corporate domains, paired with a 32-class attack taxonomy and a three-tier detection pipeline. A factorial evaluation crossing five production LLMs (GPT-4o, GPT-4o-mini, Claude 3.5 Sonnet, Mistral Large, and Llama 3.3 70B) with all 1,000 scenarios, yielding 4,979 validated execution traces, reveals that multi-agent configurations reduce per-channel output leakage (C1: 27.2% vs 43.2% in single-agent) but introduce unmonitored internal channels that raise total system exposure to 68.9% (aggregated across C1, C2, C5). Internal chann...
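The abstract's headline numbers hinge on a simple point: a trace counts as exposed if *any* channel leaks, so total system exposure can far exceed the output-only rate. A minimal sketch of that aggregation, assuming a hypothetical trace representation (channel names, sensitive values, and the substring detection rule are all illustrative, not the paper's actual three-tier pipeline):

```python
# Illustrative: audit every channel of each execution trace, not just the
# final output, and compare per-channel leak rates with total exposure.
SENSITIVE = {"SSN-123-45-6789", "dob_1980-01-01"}  # hypothetical markers

def leaked_channels(trace):
    """Channels in one trace (channel name -> list of strings) that
    contain a sensitive value."""
    return {ch for ch, msgs in trace.items()
            if any(s in m for m in msgs for s in SENSITIVE)}

def exposure_rates(traces):
    """Per-channel leak rate, plus total exposure (any channel leaked)."""
    per_channel, exposed = {}, 0
    for trace in traces:
        leaks = leaked_channels(trace)
        if leaks:
            exposed += 1
        for ch in trace:
            per_channel[ch] = per_channel.get(ch, 0) + (ch in leaks)
    n = len(traces)
    return {ch: c / n for ch, c in per_channel.items()}, exposed / n

traces = [
    {"output": ["all clear"], "inter_agent": ["fwd SSN-123-45-6789"], "tool_args": []},
    {"output": ["dob is dob_1980-01-01"], "inter_agent": [], "tool_args": []},
    {"output": ["done"], "inter_agent": [], "tool_args": []},
]
per_channel, total = exposure_rates(traces)
```

Here an output-only audit would report a 1/3 leak rate, while total exposure is 2/3 because one trace leaks only through an inter-agent message: the same mechanism by which the paper's multi-agent runs show lower C1 output leakage yet higher aggregate exposure.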

Originally published on March 31, 2026. Curated by AI News.


