[2503.03704] Memory Injection Attacks on LLM Agents via Query-Only Interaction

[2503.03704] Memory Injection Attacks on LLM Agents via Query-Only Interaction

arXiv - Machine Learning 4 min read Article

Summary

The paper discusses Memory Injection Attacks (MINJA) on LLM agents, demonstrating how attackers can manipulate agent memory through query-only interactions, leading to harmful outputs.

Why It Matters

As LLMs are increasingly integrated into various applications, understanding vulnerabilities like memory injection attacks is crucial for enhancing AI safety and security. This research highlights the potential risks associated with LLM agents and emphasizes the need for robust defenses against such attacks.

Key Takeaways

  • Introduces Memory Injection Attacks (MINJA) targeting LLM agents.
  • Attacks can be executed without direct memory modification, using only queries.
  • Demonstrates the effectiveness of MINJA through extensive experiments.
  • Highlights the risks of compromised memory in LLM applications.
  • Calls for improved security measures to protect LLM agents from such vulnerabilities.

Computer Science > Machine Learning arXiv:2503.03704 (cs) [Submitted on 5 Mar 2025 (v1), last revised 12 Feb 2026 (this version, v5)] Title:Memory Injection Attacks on LLM Agents via Query-Only Interaction Authors:Shen Dong, Shaochen Xu, Pengfei He, Yige Li, Jiliang Tang, Tianming Liu, Hui Liu, Zhen Xiang View a PDF of the paper titled Memory Injection Attacks on LLM Agents via Query-Only Interaction, by Shen Dong and 7 other authors View PDF HTML (experimental) Abstract:Agents powered by large language models (LLMs) have demonstrated strong capabilities in a wide range of complex, real-world applications. However, LLM agents with a compromised memory bank may easily produce harmful outputs when the past records retrieved for demonstration are malicious. In this paper, we propose a novel Memory INJection Attack, MINJA, without assuming that the attacker can directly modify the memory bank of the agent. The attacker injects malicious records into the memory bank by only interacting with the agent via queries and output observations. These malicious records are designed to elicit a sequence of malicious reasoning steps corresponding to a different target query during the agent's execution of the victim user's query. Specifically, we introduce a sequence of bridging steps to link victim queries to the malicious reasoning steps. During the memory injection, we propose an indication prompt that guides the agent to autonomously generate similar bridging steps, with a progressive...

Related Articles

Google’s Gemini AI can answer your questions with 3D models and simulations
Llms

Google’s Gemini AI can answer your questions with 3D models and simulations

Google's latest upgrade for Gemini will allow the chatbot to generate interactive 3D models and simulations in response to your questions...

The Verge - AI · 4 min ·
Moody’s Integrates AI Agents With Anthropic’s Claude
Llms

Moody’s Integrates AI Agents With Anthropic’s Claude

AI Tools & Products · 4 min ·
AI on the couch: Anthropic gives Claude 20 hours of psychiatry
Llms

AI on the couch: Anthropic gives Claude 20 hours of psychiatry

AI Tools & Products · 6 min ·
These AI Glasses Switch Between ChatGPT and Gemini. Why Don't More Wearables Do This?
Llms

These AI Glasses Switch Between ChatGPT and Gemini. Why Don't More Wearables Do This?

AI Tools & Products · 6 min ·
More in Llms: This Week Guide Trending

No comments

No comments yet. Be the first to comment!

Stay updated with AI News

Get the latest news, tools, and insights delivered to your inbox.

Daily or weekly digest • Unsubscribe anytime