[2504.21205] SecRepoBench: Benchmarking Code Agents for Secure Code Completion in Real-World Repositories
Summary
The paper presents SecRepoBench, a benchmark for evaluating code agents on secure code completion in real-world C/C++ repositories, and finds that code agents outperform standalone LLMs at generating secure code.
Why It Matters
As software security becomes increasingly critical, understanding the capabilities of code agents in generating secure code is vital. SecRepoBench provides a structured way to assess these tools, highlighting their strengths and weaknesses, which can guide future improvements in secure coding practices.
Key Takeaways
- SecRepoBench benchmarks 29 LLMs and 15 code agents on secure code completion.
- Code agents significantly outperform standalone LLMs in generating secure code.
- The benchmark includes 318 tasks across 27 C/C++ repositories, covering 15 CWEs.
- SecRepoBench is more difficult than the prior state-of-the-art secure code generation benchmark.
- Insights from the analysis suggest potential enhancements for code agents.
Computer Science > Cryptography and Security
arXiv:2504.21205 (cs)
[Submitted on 29 Apr 2025 (v1), last revised 14 Feb 2026 (this version, v3)]
Title: SecRepoBench: Benchmarking Code Agents for Secure Code Completion in Real-World Repositories
Authors: Chihao Shen, Connor Dilgren, Purva Chiniya, Luke Griffith, Yu Ding, Yizheng Chen
Abstract: This paper introduces SecRepoBench, a benchmark to evaluate code agents on secure code completion in real-world repositories. SecRepoBench has 318 code completion tasks in 27 C/C++ repositories, covering 15 CWEs. We evaluate 29 standalone LLMs and 15 code agents across 3 state-of-the-art agent frameworks using our benchmark. We find that state-of-the-art LLMs struggle with generating correct and secure code completions. However, code agents significantly outperform standalone LLMs. We show that SecRepoBench is more difficult than the prior state-of-the-art benchmark. Finally, our comprehensive analysis provides insights into potential directions for enhancing the ability of code agents to write correct and secure code in real-world repositories.
Subjects: Cryptography and Security (cs.CR); Artificial Intelligence (cs.AI)
Cite as: arXiv:2504.21205 [cs.CR] (or arXiv:2504.21205v3 [cs.CR] for this version)
DOI: https://doi.org/10.48550/arXiv.2504.21205