[2602.13156] In-Context Autonomous Network Incident Response: An End-to-End Large Language Model Agent Approach
Summary
This paper presents an end-to-end large language model (LLM) agent for network incident response that autonomously learns and adapts to evolving cyber threats, recovering from incidents up to 23% faster than existing LLM baselines.
Why It Matters
As cyberattacks become increasingly sophisticated, traditional incident response methods struggle to keep pace. This research highlights the potential of LLMs to enhance incident response capabilities, offering a more efficient and adaptable solution that could significantly reduce recovery times and improve overall cybersecurity.
Key Takeaways
- The proposed LLM agent integrates perception, reasoning, planning, and action functionalities for incident response.
- It requires no handcrafted simulator modeling, and its lightweight 14B model runs on standard hardware.
- The agent demonstrates in-context adaptation, refining its attack conjectures based on actual outcomes.
- Evaluation shows a recovery speed improvement of up to 23% compared to existing LLM-based approaches.
- This approach addresses the limitations of traditional reinforcement learning methods in cybersecurity.
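The four functionalities listed above can be sketched as a single agent loop. Everything below is an illustrative assumption, not the paper's implementation: the prompts, the `query_llm` stub (standing in for the fine-tuned 14B model), and the toy selection rule are all hypothetical.

```python
# Illustrative sketch of the perception-reasoning-planning-action loop.
# All prompts, helper names, and the query_llm stub are hypothetical;
# the paper fine-tunes a 14B LLM, which the stub stands in for here.

from dataclasses import dataclass, field


def query_llm(prompt: str) -> str:
    """Stand-in for a call to the fine-tuned 14B model (stubbed for the sketch)."""
    return "isolate_host"


@dataclass
class IncidentAgent:
    attack_conjecture: str = "unknown"
    history: list = field(default_factory=list)

    def perceive(self, raw_logs: str) -> str:
        # Perception: infer the underlying network state from raw logs/alerts.
        return query_llm(f"Infer the network state from these logs:\n{raw_logs}")

    def reason(self, state: str) -> None:
        # Reasoning: update the conjectured attack model given the state.
        self.attack_conjecture = query_llm(
            f"Given state '{state}' and past actions {self.history}, "
            f"which attack is most likely underway?"
        )

    def plan(self, state: str, candidates: list[str]) -> str:
        # Planning: ask the model to simulate consequences of each candidate
        # response, then pick one (toy rule: shortest predicted-outcome text).
        scored = {
            a: query_llm(
                f"Predict the outcome of '{a}' in state '{state}' "
                f"under attack '{self.attack_conjecture}'."
            )
            for a in candidates
        }
        return min(scored, key=lambda a: len(scored[a]))

    def act(self, action: str) -> None:
        # Action: execute the chosen response and record it for adaptation.
        self.history.append(action)
```

A usage pass would call `perceive`, `reason`, `plan`, and `act` in sequence on each new batch of logs, so every stage conditions on the record the previous stages left behind.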
Computer Science > Cryptography and Security
arXiv:2602.13156 (cs) [Submitted on 13 Feb 2026]
Authors: Yiran Gao, Kim Hammar, Tao Li
Abstract: Rapidly evolving cyberattacks demand incident response systems that can autonomously learn and adapt to changing threats. Prior work has extensively explored the reinforcement learning approach, which involves learning response strategies through extensive simulation of the incident. While this approach can be effective, it requires handcrafted modeling of the simulator and suppresses useful semantics from raw system logs and alerts. To address these limitations, we propose to leverage large language models' (LLM) pre-trained security knowledge and in-context learning to create an end-to-end agentic solution for incident response planning. Specifically, our agent integrates four functionalities, perception, reasoning, planning, and action, into one lightweight LLM (14b model). Through fine-tuning and chain-of-thought reasoning, our LLM agent is capable of processing system logs and inferring the underlying network state (perception), updating its conjecture of attack models (reasoning), simulating consequences under different response strategies (planning), and ...
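The in-context adaptation highlighted in the takeaways (refining attack conjectures based on actual outcomes) can be illustrated with a simple hypothesis-reweighting sketch. The candidate attacks, outcome labels, and multiplicative scoring rule below are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch of refining attack conjectures from observed outcomes.
# The agent keeps a weight per candidate attack model and boosts the ones
# whose predictions matched what actually happened after a response action.

def refine_conjectures(weights: dict[str, float],
                       predictions: dict[str, str],
                       observed_outcome: str,
                       boost: float = 2.0) -> dict[str, float]:
    """Multiply the weight of each hypothesis whose prediction matched the
    observed outcome by `boost`, then renormalize so weights sum to 1."""
    updated = {
        attack: w * (boost if predictions[attack] == observed_outcome else 1.0)
        for attack, w in weights.items()
    }
    total = sum(updated.values())
    return {attack: w / total for attack, w in updated.items()}


# Example: two candidate attack models; only the ransomware model correctly
# predicted that file encryption would continue after blocking the IP.
weights = {"ransomware": 0.5, "data_exfiltration": 0.5}
predictions = {"ransomware": "encryption_continues",
               "data_exfiltration": "traffic_stops"}
new_weights = refine_conjectures(weights, predictions, "encryption_continues")
```

After the update, the ransomware hypothesis carries twice the weight of the alternative, mirroring how the agent's conjecture shifts toward attack models whose simulated consequences matched reality.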