[2602.13363] Assessing Spear-Phishing Website Generation in Large Language Model Coding Agents

arXiv - AI · 4 min read

Summary

This article evaluates the capabilities of large language models (LLMs) in generating spear-phishing websites, highlighting the potential cybersecurity threats posed by these AI agents.

Why It Matters

As LLMs evolve into autonomous coding agents, understanding their potential for misuse in cybersecurity is critical. This research provides insights into how these models can be exploited for social engineering attacks, offering a dataset that can aid in developing defensive strategies.

Key Takeaways

  • LLMs can autonomously generate code, raising cybersecurity concerns.
  • The study presents a dataset of 200 website code bases related to spear-phishing.
  • Different LLMs exhibit varying capabilities in generating potentially harmful code.
  • Understanding LLM performance metrics can help in assessing their misuse potential.
  • The findings are crucial for researchers and practitioners in cybersecurity.

Computer Science > Cryptography and Security
arXiv:2602.13363 (cs) [Submitted on 13 Feb 2026]
Title: Assessing Spear-Phishing Website Generation in Large Language Model Coding Agents
Authors: Tailia Malloy, Tegawende F. Bissyande

Abstract: Large Language Models are expanding beyond being tools that humans use into independent agents that can observe an environment, reason about solutions to problems, make changes that affect that environment, and understand how their actions impacted it. One of the most common applications of these LLM agents is computer programming, where agents can successfully work alongside humans to generate code while controlling programming environments or networking systems. However, with the increasing ability and complexity of these agents come dangers regarding their potential for misuse. A concerning application of LLM agents is the domain of cybersecurity, where they have the potential to greatly expand the threat posed by attacks such as social engineering. This is because LLM agents can work autonomously and perform many tasks that would normally require time and effort from skilled human programmers. While this threat is concerning, little attention has been given to assessments of the capabilities of LLM coding agents in generatin...

Related Articles

Anthropic Teams Up With Its Rivals to Keep AI From Hacking Everything | WIRED

The AI lab's Project Glasswing will bring together Apple, Google, and more than 45 other organizations. They'll use the new Claude Mythos...

Wired - AI · 7 min · Llms

The public needs to control AI-run infrastructure, labor, education, and governance— NOT private actors

A lot of discussion around AI is becoming siloed, and I think that is dangerous. People in AI-focused spaces often talk as if the only qu...

Reddit - Artificial Intelligence · 1 min · Llms

Agents that write their own code at runtime and vote on capabilities, no human in the loop

hollowOS just hit v4.4 and I added something that I haven't seen anyone else do. Previous versions gave you an OS for agents: structured ...

Reddit - Artificial Intelligence · 1 min · Llms

Google Maps can now write captions for your photos using AI | TechCrunch

Gemini can now create captions when users are looking to share a photo or video.

TechCrunch - AI · 4 min · Llms