[2605.07830] CyBiasBench: Benchmarking Bias in LLM Agents for Cyber-Attack Scenarios


Computer Science > Cryptography and Security
arXiv:2605.07830 (cs)
[Submitted on 8 May 2026]

Title: CyBiasBench: Benchmarking Bias in LLM Agents for Cyber-Attack Scenarios
Authors: Taein Lim, Seongyong Ju, Munhyeok Kim, Hyunjun Kim, Hoki Kim

Abstract: Large language models (LLMs) are increasingly deployed as autonomous agents in offensive cybersecurity. In this paper, we reveal an interesting phenomenon: different agents exhibit distinct attack patterns. Specifically, each agent exhibits an attack-selection bias, disproportionately concentrating its efforts on a narrow subset of attack families regardless of prompt variations. To systematically quantify this behavior, we introduce CyBiasBench, a comprehensive 630-session benchmark that evaluates five agents on three targets and four prompt conditions with ten attack families. We identify explicit bias across agents, with different dominant attack families and varying entropy levels in their attack-family allocation distributions. Such bias is better characterized as a trait of the agents, rather than a factor associated with the attack success rate. Furthermore, our experiments reveal a bias momentum effect, where agents resist explicit steering toward attack families that conflict with their bias. This forced distribution shift does not yield measurable improvem...
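The abstract quantifies attack-selection bias via the entropy of each agent's attack-family allocation: a strongly biased agent concentrates on few families (low entropy), while an unbiased one spreads effort evenly (high entropy). A minimal sketch of that measurement, using hypothetical family labels and session data not taken from the paper:

```python
import math
from collections import Counter

def attack_family_entropy(sessions):
    """Shannon entropy (in bits) of an agent's attack-family allocation.

    `sessions` is a list of attack-family labels, one label per attempt.
    Lower entropy means the agent concentrates on fewer families,
    i.e. a stronger attack-selection bias.
    """
    counts = Counter(sessions)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical data: a heavily biased agent (80% one family) ...
biased = ["sql_injection"] * 8 + ["xss"] * 2
# ... versus an agent spreading effort evenly over four families.
uniform = ["sql_injection", "xss", "path_traversal", "csrf"] * 3

print(round(attack_family_entropy(biased), 3))   # 0.722 bits
print(round(attack_family_entropy(uniform), 3))  # 2.0 bits (log2 of 4 families)
```

Comparing entropy across prompt conditions, as the benchmark does across its 630 sessions, would show whether the allocation distribution stays narrow regardless of how the agent is prompted.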

Originally published on May 11, 2026. Curated by AI News.

Related Articles

Researchers asked ChatGPT, Gemini and Claude which jobs are most exposed to AI. The chatbots wildly disagree
A study reveals that AI models disagree on which jobs are most vulnerable to automation, highlighting the unreliability of AI-generated e...
AI Tools & Products · 4 min

I stopped treating ChatGPT like Google — and everything suddenly clicked
I stopped using ChatGPT like Google and started treating it like a thinking partner — here’s why that simple shift made the AI dramatical...
AI Tools & Products · 8 min

Hackers abuse Google ads, Claude.ai chats to push Mac malware
AI Tools & Products · 6 min

Does Claude dream of electric gavels? A federal case with Kansas connections sets an AI precedent.
AI Tools & Products

