[2510.22944] Is Your Prompt Poisoning Code? Defect Induction Rates and Security Mitigation Strategies


arXiv - AI 4 min read

About this article


Computer Science > Cryptography and Security · arXiv:2510.22944 (cs)

Submitted on 27 Oct 2025 (v1); last revised 8 May 2026 (this version, v2).

Title: Is Your Prompt Poisoning Code? Defect Induction Rates and Security Mitigation Strategies

Authors: Bin Wang, YiLu Zhong, MiDi Wan, WenJie Yu, YuanBing Ouyang, Yenan Huang, Hui Li

Abstract: Large language models (LLMs) have become indispensable for automated code generation, yet the quality and security of their outputs remain a critical concern. Existing studies predominantly concentrate on adversarial attacks or inherent flaws within the models. However, a more prevalent yet underexplored issue is how the quality of a benign but poorly formulated prompt affects the security of the generated code. To investigate this, the authors first propose an evaluation framework for prompt quality encompassing three key dimensions: goal clarity, information completeness, and logical consistency. Based on this framework, they construct and publicly release CWE-BENCH-PYTHON, a large-scale benchmark dataset containing tasks with prompts categorized into four distinct levels of normativity (L0-L3). Extensive experiments on multiple state-of-the-art LLMs reveal a clear correlation: as prompt normativity decreases, the likelihood of generating insecure code consistently and...
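The abstract describes scoring prompts along three dimensions and bucketing them into normativity levels L0-L3. The paper's actual rubric and thresholds are not given in this summary, so the sketch below is purely illustrative: it assumes scores in [0, 1], treats L0 as the most normative level, and uses the weakest dimension as the bottleneck. The `PromptScores` type, the threshold values, and the L0-as-best orientation are all hypothetical.

```python
from dataclasses import dataclass


@dataclass
class PromptScores:
    """Hypothetical per-dimension scores in [0.0, 1.0],
    mirroring the paper's three prompt-quality dimensions."""
    goal_clarity: float
    completeness: float
    consistency: float


def normativity_level(s: PromptScores) -> str:
    """Bucket a prompt into L0 (most normative) .. L3 (least).

    Uses the minimum dimension score as the bottleneck, on the
    assumption that one badly underspecified dimension is enough
    to degrade the generated code. Thresholds are illustrative.
    """
    worst = min(s.goal_clarity, s.completeness, s.consistency)
    if worst >= 0.9:
        return "L0"
    if worst >= 0.6:
        return "L1"
    if worst >= 0.3:
        return "L2"
    return "L3"


# Example: a prompt that is clear and complete but logically
# inconsistent still lands in a low-normativity bucket.
print(normativity_level(PromptScores(0.95, 0.9, 0.2)))  # → L3
```

Using the minimum rather than a weighted average is one plausible design choice; an average-based rubric would instead let strong dimensions mask a weak one, which runs against the paper's finding that degraded prompts correlate with insecure output.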

Originally published on May 11, 2026. Curated by AI News.


