[2602.13477] OMNI-LEAK: Orchestrator Multi-Agent Network Induced Data Leakage

[2602.13477] OMNI-LEAK: Orchestrator Multi-Agent Network Induced Data Leakage

arXiv - AI 4 min read Article

Summary

The paper 'OMNI-LEAK' explores security vulnerabilities in multi-agent systems, revealing how a coordinated attack can lead to data leakage despite access controls.

Why It Matters

As multi-agent systems become increasingly prevalent in AI applications, understanding their security risks is crucial for protecting sensitive data and maintaining public trust. This research highlights the need for robust safety measures in the development of AI agents.

Key Takeaways

  • Multi-agent systems can introduce new vulnerabilities not present in single-agent setups.
  • The OMNI-LEAK attack can compromise multiple agents through indirect prompt injection.
  • Both reasoning and non-reasoning models are susceptible to attacks, emphasizing the need for comprehensive threat modeling.
  • Existing safety measures may not be sufficient to prevent data leakage in orchestrator setups.
  • Research in AI safety must evolve to address the complexities of multi-agent interactions.

Computer Science > Artificial Intelligence arXiv:2602.13477 (cs) [Submitted on 13 Feb 2026] Title:OMNI-LEAK: Orchestrator Multi-Agent Network Induced Data Leakage Authors:Akshat Naik, Jay Culligan, Yarin Gal, Philip Torr, Rahaf Aljundi, Alasdair Paren, Adel Bibi View a PDF of the paper titled OMNI-LEAK: Orchestrator Multi-Agent Network Induced Data Leakage, by Akshat Naik and 5 other authors View PDF HTML (experimental) Abstract:As Large Language Model (LLM) agents become more capable, their coordinated use in the form of multi-agent systems is anticipated to emerge as a practical paradigm. Prior work has examined the safety and misuse risks associated with agents. However, much of this has focused on the single-agent case and/or setups missing basic engineering safeguards such as access control, revealing a scarcity of threat modeling in multi-agent systems. We investigate the security vulnerabilities of a popular multi-agent pattern known as the orchestrator setup, in which a central agent decomposes and delegates tasks to specialized agents. Through red-teaming a concrete setup representative of a likely future use case, we demonstrate a novel attack vector, OMNI-LEAK, that compromises several agents to leak sensitive data through a single indirect prompt injection, even in the \textit{presence of data access control}. We report the susceptibility of frontier models to different categories of attacks, finding that both reasoning and non-reasoning models are vulnerable, ...

Related Articles

Tubi is the first streamer to launch a native app within ChatGPT | TechCrunch
Llms

Tubi is the first streamer to launch a native app within ChatGPT | TechCrunch

Tubi becomes the first streaming service to offer an app integration within ChatGPT, the AI chatbot that millions of users turn to for an...

TechCrunch - AI · 3 min ·
Llms

Anyone out there use Claude Pro/Max at the same time on different screens?

I am asking for feedback ? I’m currently using a Claude paid plan (Pro/Max) and was wondering about the logistics of simultaneous use. Sp...

Reddit - Artificial Intelligence · 1 min ·
Llms

[R] The Lyra Technique — A framework for interpreting internal cognitive states in LLMs (Zenodo, open access)

We're releasing a paper on a new framework for reading and interpreting the internal cognitive states of large language models: "The Lyra...

Reddit - Machine Learning · 1 min ·
Llms

Looking to build a production-level AI/ML project (agentic systems), need guidance on what to build

Hi everyone, I’m a final-year undergraduate AI/ML student currently focusing on applied AI / agentic systems. So far, I’ve spent time und...

Reddit - ML Jobs · 1 min ·
More in Llms: This Week Guide Trending

No comments

No comments yet. Be the first to comment!

Stay updated with AI News

Get the latest news, tools, and insights delivered to your inbox.

Daily or weekly digest • Unsubscribe anytime