[2503.03170] AttackSeqBench: Benchmarking the Capabilities of LLMs for Attack Sequences Understanding
Computer Science > Cryptography and Security
arXiv:2503.03170 (cs)
[Submitted on 5 Mar 2025 (v1), last revised 3 Mar 2026 (this version, v3)]
Authors: Haokai Ma, Javier Yong, Yunshan Ma, Kuei Chen, Anis Yusof, Zhenkai Liang, Ee-Chien Chang
Abstract: Cyber Threat Intelligence (CTI) reports document observations of cyber threats, synthesizing evidence about adversaries' actions and intent into actionable knowledge that informs detection, response, and defense planning. However, the unstructured and verbose nature of CTI reports makes it challenging for security practitioners to manually extract and analyze such behavioral sequences. Although large language models (LLMs) show promise in cybersecurity tasks such as entity extraction and knowledge graph construction, their ability to understand and reason about behavioral sequences remains underexplored. To address this, we introduce AttackSeqBench, a benchmark designed to systematically evaluate LLMs' reasoning abilities across the tactical, technical, and procedural dimensions of adversarial behaviors, while satisfying Extensibility, Reasoning Scalability, and Domain-specific Epistemic Expandability. We further benchmark 7 LLMs, 5 LRMs and 4 pos...