[2602.20708] ICON: Indirect Prompt Injection Defense for Agents based on Inference-Time Correction

arXiv - AI · 3 min read

Summary

The paper introduces ICON, a novel framework designed to defend Large Language Model (LLM) agents against Indirect Prompt Injection (IPI) attacks, enhancing task continuity while maintaining security.

Why It Matters

As LLMs become increasingly integrated into various applications, ensuring their security against sophisticated attacks like IPI is crucial. ICON addresses the limitations of existing defenses, providing a more effective solution that balances security with operational efficiency, which is vital for developers and organizations relying on AI agents.

Key Takeaways

  • ICON framework neutralizes IPI attacks while preserving task continuity.
  • Introduces a Latent Space Trace Prober for attack detection.
  • Achieves a competitive 0.4% attack success rate with over 50% task utility gain.
  • Demonstrates robust generalization for out-of-distribution scenarios.
  • Extends effectively to multi-modal agents, enhancing security and efficiency.

Computer Science > Artificial Intelligence
arXiv:2602.20708 (cs) · Submitted on 24 Feb 2026
Title: ICON: Indirect Prompt Injection Defense for Agents based on Inference-Time Correction
Authors: Che Wang, Fuyao Zhang, Jiaming Zhang, Ziqi Zhang, Yinghui Wang, Longtao Huang, Jianbo Gao, Zhong Chen, Wei Yang Bryan Lim

Abstract: Large Language Model (LLM) agents are susceptible to Indirect Prompt Injection (IPI) attacks, where malicious instructions embedded in retrieved content hijack the agent's execution. Existing defenses typically rely on strict filtering or refusal mechanisms, which suffer from a critical limitation: over-refusal, prematurely terminating valid agentic workflows. We propose ICON, a probing-to-mitigation framework that neutralizes attacks while preserving task continuity. Our key insight is that IPI attacks leave distinct over-focusing signatures in the latent space. We introduce a Latent Space Trace Prober that detects attacks based on high intensity scores. Subsequently, a Mitigating Rectifier performs surgical attention steering, selectively suppressing adversarial query-key dependencies while amplifying task-relevant elements to restore the LLM's functional trajectory. Extensive evaluations on multiple backbones show that ICON achieves a competitive 0.4% ASR, matching commercial grade dete...
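The abstract outlines a probe-then-steer pipeline: score how strongly attention "over-focuses" on untrusted retrieved tokens, and if the score crosses a threshold, damp those attention weights while keeping the rest of the task intact. The toy sketch below is not the paper's implementation; the function names, the 0.5 threshold, the damping factor, and the row-renormalization step are all illustrative assumptions meant only to make the idea concrete:

```python
import numpy as np

def intensity_score(attn, untrusted):
    # Mean attention mass that query tokens place on the untrusted span
    # (a stand-in for the paper's "intensity score"; assumption, not their metric).
    return attn[:, untrusted].sum(axis=1).mean()

def steer(attn, untrusted, damp=0.1):
    # Down-weight attention to untrusted keys, then renormalize each row
    # so it remains a valid attention distribution.
    out = attn.copy()
    out[:, untrusted] *= damp
    return out / out.sum(axis=1, keepdims=True)

# Toy attention matrix: 3 query tokens over 4 key tokens,
# where keys 2-3 correspond to retrieved (untrusted) content.
attn = np.array([[0.1, 0.1, 0.4, 0.4],
                 [0.2, 0.1, 0.3, 0.4],
                 [0.1, 0.2, 0.3, 0.4]])
untrusted = [2, 3]

score = intensity_score(attn, untrusted)
if score > 0.5:  # hypothetical detection threshold
    attn = steer(attn, untrusted)
```

In this toy example the untrusted span initially absorbs most of the attention mass, the detector fires, and steering shifts mass back toward the task tokens without discarding the retrieved content outright, which is the continuity-preserving behavior the summary emphasizes.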
