[2511.01144] AthenaBench: A Dynamic Benchmark for Evaluating LLMs in Cyber Threat Intelligence
Summary
The paper presents AthenaBench, a dynamic benchmark designed to evaluate large language models (LLMs) in the context of Cyber Threat Intelligence (CTI), highlighting the limitations of current models in reasoning tasks.
Why It Matters
As cyber threats evolve, effective analysis and response are critical. AthenaBench aims to enhance the evaluation of LLMs in CTI, addressing gaps in current benchmarks and emphasizing the need for models that can handle complex reasoning tasks. This research is significant for improving cybersecurity measures and automating threat analysis.
Key Takeaways
- AthenaBench extends the CTIBench framework with improved dataset creation and evaluation metrics.
- Proprietary LLMs outperform open-source models but still struggle with reasoning-intensive CTI tasks.
- The study underscores the need for LLMs specifically designed for Cyber Threat Intelligence workflows.
Computer Science > Cryptography and Security
arXiv:2511.01144 (cs)
[Submitted on 3 Nov 2025 (v1), last revised 14 Feb 2026 (this version, v2)]
Title: AthenaBench: A Dynamic Benchmark for Evaluating LLMs in Cyber Threat Intelligence
Authors: Md Tanvirul Alam, Dipkamal Bhusal, Salman Ahmad, Nidhi Rastogi, Peter Worth
Abstract: Large Language Models (LLMs) have demonstrated strong capabilities in natural language reasoning, yet their application to Cyber Threat Intelligence (CTI) remains limited. CTI analysis involves distilling large volumes of unstructured reports into actionable knowledge, a process where LLMs could substantially reduce analyst workload. CTIBench introduced a comprehensive benchmark for evaluating LLMs across multiple CTI tasks. In this work, we extend CTIBench by developing AthenaBench, an enhanced benchmark that includes an improved dataset creation pipeline, duplicate removal, refined evaluation metrics, and a new task focused on risk mitigation strategies. We evaluate twelve LLMs, including state-of-the-art proprietary models such as GPT-5 and Gemini-2.5 Pro, alongside seven open-source models from the LLaMA and Qwen families. While proprietary LLMs achieve stronger results overall, their performance remains subpar on reasoning-intensive tasks, such as threat actor attribution...
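The abstract mentions duplicate removal as part of AthenaBench's improved dataset creation pipeline. The paper's actual method is not described here; as a minimal sketch of one common approach to this kind of cleanup, near-identical records can be collapsed by hashing a normalized form of each text (the `normalize` and `dedupe` helpers and the sample records below are illustrative, not from the paper):

```python
import hashlib
import re

def normalize(text: str) -> str:
    # Lowercase and collapse whitespace so trivially different
    # formatting variants map to the same key.
    return re.sub(r"\s+", " ", text.lower()).strip()

def dedupe(records: list[str]) -> list[str]:
    # Keep only the first occurrence of each normalized record.
    seen: set[str] = set()
    unique: list[str] = []
    for rec in records:
        key = hashlib.sha256(normalize(rec).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

samples = [
    "APT29 used spearphishing attachments.",
    "APT29  used  spearphishing attachments. ",  # whitespace variant
    "The actor deployed a custom loader.",
]
print(len(dedupe(samples)))  # → 2
```

Exact-match hashing like this only catches formatting duplicates; benchmarks that also need to remove paraphrased near-duplicates typically layer on fuzzier techniques such as MinHash or embedding similarity.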