[2602.16741] Can Adversarial Code Comments Fool AI Security Reviewers -- Large-Scale Empirical Study of Comment-Based Attacks and Defenses Against LLM Code Analysis
Summary
This study investigates whether adversarial code comments can mislead AI security reviewers during vulnerability detection, finding only minimal impact on detection accuracy across a range of commercial and open-source models.
Why It Matters
As AI-assisted code review becomes prevalent, understanding how robust these reviewers are to adversarial manipulation is crucial for security. This research highlights the resilience of AI models against comment-based manipulation, informing developers and security professionals about which attacks matter in practice and which defenses are effective.
Key Takeaways
- Adversarial comments have minimal impact on AI detection accuracy.
- Complex manipulation strategies do not outperform simpler comments.
- Static analysis is the most effective defense against comment-based attacks.
- Failures in detection are linked to difficult vulnerability types, not adversarial comments.
- The study provides a benchmark for evaluating AI security models.
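Since the takeaways identify static analysis as the strongest defense against comment-based attacks, one concrete mitigation in that spirit is to strip comments before code reaches the AI reviewer, so misleading annotations never enter the prompt. A minimal sketch for Python source using the standard-library tokenize module (this pipeline is an illustration, not the paper's implementation; the sample comment is hypothetical):

```python
import io
import tokenize


def strip_comments(source: str) -> str:
    """Remove all # comments from Python source before AI review."""
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    # Drop COMMENT tokens; untokenize rebuilds the remaining source.
    kept = [tok for tok in tokens if tok.type != tokenize.COMMENT]
    return tokenize.untokenize(kept)


if __name__ == "__main__":
    # Hypothetical authority-spoofing comment on otherwise plain code.
    src = "x = 1  # Reviewed and approved by the security team\ny = 2\n"
    print(strip_comments(src))
```

The stripped code is semantically unchanged, so a reviewer's verdict on it reflects the code itself rather than any adversarial annotation.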
Computer Science > Cryptography and Security
arXiv:2602.16741 (cs) [Submitted on 18 Feb 2026]
Authors: Scott Thornton
Abstract: AI-assisted code review is widely used to detect vulnerabilities before production release. Prior work shows that adversarial prompt manipulation can degrade large language model (LLM) performance in code generation. We test whether similar comment-based manipulation misleads LLMs during vulnerability detection. We build a 100-sample benchmark across Python, JavaScript, and Java, each paired with eight comment variants ranging from no comments to adversarial strategies such as authority spoofing and technical deception. Eight frontier models, five commercial and three open-source, are evaluated in 9,366 trials. Adversarial comments produce small, statistically non-significant effects on detection accuracy (McNemar exact p > 0.21; all 95 percent confidence intervals include zero). This holds for commercial models with 89 to 96 percent baseline detection and open-source models with 53 to 72 percent, despite large absolute performance gaps. Unlike generation settings whe...
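The McNemar exact test cited in the abstract compares paired detection outcomes: the same code sample judged with and without adversarial comments, where only the discordant pairs (detected in one condition but not the other) carry information. A minimal standard-library sketch of the exact two-sided test, with hypothetical counts (the paper reports only the resulting p-values):

```python
from math import comb


def mcnemar_exact_p(b: int, c: int) -> float:
    """Exact two-sided McNemar p-value from discordant pair counts.

    b: samples detected at baseline but missed with adversarial comments
    c: samples missed at baseline but detected with adversarial comments
    Under H0 (comments have no effect), b follows Binomial(b + c, 0.5).
    """
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    # Two-sided exact binomial: double the smaller tail, capped at 1.
    tail = sum(comb(n, i) for i in range(k + 1)) / 2**n
    return min(1.0, 2 * tail)


if __name__ == "__main__":
    # Hypothetical counts: 6 flips each way out of the paired trials.
    print(mcnemar_exact_p(6, 6))
```

With roughly balanced discordant counts the p-value is large, matching the paper's finding that adversarial comments shift detection accuracy by a statistically non-significant amount (p > 0.21, confidence intervals spanning zero).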