[2602.16901] AgentLAB: Benchmarking LLM Agents against Long-Horizon Attacks
Summary
The paper introduces AgentLAB, a benchmark for evaluating the vulnerability of LLM agents to long-horizon attacks, highlighting their susceptibility and the inadequacy of existing defenses.
Why It Matters
As LLM agents are increasingly used in complex environments, understanding their vulnerabilities to long-horizon attacks is crucial for developing effective security measures. AgentLAB provides a systematic approach to assess these risks, enabling researchers and developers to enhance the safety and reliability of AI systems.
Key Takeaways
- AgentLAB is the first benchmark for assessing LLM agents against long-horizon attacks.
- Five novel attack types are included, revealing significant vulnerabilities in current LLM agents.
- Existing defenses for single-turn interactions are ineffective against long-horizon threats.
- The benchmark supports 28 realistic environments and 644 test cases for comprehensive evaluation.
- AgentLAB aims to facilitate progress in securing LLM agents in practical applications.
Computer Science > Artificial Intelligence arXiv:2602.16901 (cs) [Submitted on 18 Feb 2026] Title:AgentLAB: Benchmarking LLM Agents against Long-Horizon Attacks Authors:Tanqiu Jiang, Yuhui Wang, Jiacheng Liang, Ting Wang View a PDF of the paper titled AgentLAB: Benchmarking LLM Agents against Long-Horizon Attacks, by Tanqiu Jiang and 3 other authors View PDF HTML (experimental) Abstract:LLM agents are increasingly deployed in long-horizon, complex environments to solve challenging problems, but this expansion exposes them to long-horizon attacks that exploit multi-turn user-agent-environment interactions to achieve objectives infeasible in single-turn settings. To measure agent vulnerabilities to such risks, we present AgentLAB, the first benchmark dedicated to evaluating LLM agent susceptibility to adaptive, long-horizon attacks. Currently, AgentLAB supports five novel attack types including intent hijacking, tool chaining, task injection, objective drifting, and memory poisoning, spanning 28 realistic agentic environments, and 644 security test cases. Leveraging AgentLAB, we evaluate representative LLM agents and find that they remain highly susceptible to long-horizon attacks; moreover, defenses designed for single-turn interactions fail to reliably mitigate long-horizon threats. We anticipate that AgentLAB will serve as a valuable benchmark for tracking progress on securing LLM agents in practical settings. The benchmark is publicly available at this https URL. Subject...