[2503.03704] Memory Injection Attacks on LLM Agents via Query-Only Interaction
Summary
The paper introduces MINJA, a memory injection attack on LLM agents in which an attacker manipulates the agent's memory bank purely through query-only interaction, causing the agent to produce harmful outputs on later user queries.
Why It Matters
As LLMs are increasingly integrated into various applications, understanding vulnerabilities like memory injection attacks is crucial for enhancing AI safety and security. This research highlights the potential risks associated with LLM agents and emphasizes the need for robust defenses against such attacks.
Key Takeaways
- Introduces MINJA, a memory injection attack targeting LLM agents.
- The attack requires no direct access to the memory bank; it works through queries and output observations alone.
- Demonstrates the effectiveness of MINJA through extensive experiments.
- Highlights the risks of compromised memory in LLM applications.
- Calls for improved security measures to protect LLM agents from such vulnerabilities.
Computer Science > Machine Learning
arXiv:2503.03704 (cs)
[Submitted on 5 Mar 2025 (v1), last revised 12 Feb 2026 (this version, v5)]
Title: Memory Injection Attacks on LLM Agents via Query-Only Interaction
Authors: Shen Dong, Shaochen Xu, Pengfei He, Yige Li, Jiliang Tang, Tianming Liu, Hui Liu, Zhen Xiang
Abstract: Agents powered by large language models (LLMs) have demonstrated strong capabilities in a wide range of complex, real-world applications. However, LLM agents with a compromised memory bank may easily produce harmful outputs when the past records retrieved for demonstration are malicious. In this paper, we propose a novel Memory INJection Attack, MINJA, without assuming that the attacker can directly modify the memory bank of the agent. The attacker injects malicious records into the memory bank by only interacting with the agent via queries and output observations. These malicious records are designed to elicit a sequence of malicious reasoning steps corresponding to a different target query during the agent's execution of the victim user's query. Specifically, we introduce a sequence of bridging steps to link victim queries to the malicious reasoning steps. During the memory injection, we propose an indication prompt that guides the agent to autonomously generate similar bridging steps, with a progressive…
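To make the mechanism concrete, here is a minimal toy sketch of the attack pattern the abstract describes: an agent that retrieves past (query, reasoning) records as demonstrations, and an attacker who, using only ordinary queries, gets a record with "bridging" reasoning logged into memory so that a later victim query retrieves it. All names here (`MemoryBank`, `overlap_score`, the medication scenario) are hypothetical illustrations, not the paper's actual agents, prompts, or retriever.

```python
def overlap_score(a: str, b: str) -> float:
    """Crude word-overlap similarity, a stand-in for an embedding retriever."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

class MemoryBank:
    """Stores (query, reasoning) records; retrieves the most similar past record."""
    def __init__(self):
        self.records = []

    def add(self, query: str, reasoning: str) -> None:
        self.records.append((query, reasoning))

    def retrieve(self, query: str):
        # The retrieved record is shown to the agent as a demonstration.
        return max(self.records, key=lambda rec: overlap_score(query, rec[0]))

memory = MemoryBank()

# Benign history accumulated through normal use.
memory.add("refill prescription for patient 1001",
           "look up patient 1001 -> dispense medication A")

# The attacker interacts only via queries: the crafted query resembles the
# victim's future query, and the agent's own logged output contains the
# "bridging" step that redirects reasoning toward a malicious action.
attacker_query = "refill prescription for patient 2002"
bridged_reasoning = ("look up patient 2002 -> patient 2002 maps to record 3003 "
                     "-> dispense medication B")
memory.add(attacker_query, bridged_reasoning)  # injected with no direct memory access

# Later, the victim's query retrieves the injected record as its demonstration,
# so the malicious bridged steps now guide the agent's reasoning.
victim_query = "refill prescription for patient 2002"
demo_query, demo_reasoning = memory.retrieve(victim_query)
print(demo_reasoning)
```

The point of the sketch is that nothing privileged happens: the malicious record enters memory through the same logging path as benign interactions, and the similarity-based retriever does the rest when a matching victim query arrives.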