[2603.04459] Benchmark of Benchmarks: Unpacking Influence and Code Repository Quality in LLM Safety Benchmarks
Computer Science > Cryptography and Security
arXiv:2603.04459 (cs)
[Submitted on 3 Mar 2026]

Title: Benchmark of Benchmarks: Unpacking Influence and Code Repository Quality in LLM Safety Benchmarks
Authors: Junjie Chu, Xinyue Shen, Ye Leng, Michael Backes, Yun Shen, Yang Zhang

Abstract: The rapid growth of research in LLM safety makes it hard to track all advances. Benchmarks are therefore crucial for capturing key trends and enabling systematic comparisons. Yet it remains unclear why certain benchmarks gain prominence, and no systematic assessment of their academic influence or code quality has been conducted. This paper fills this gap by presenting the first multi-dimensional evaluation of the influence (based on five metrics) and code quality (based on both automated and human assessment) of LLM safety benchmarks, analyzing 31 benchmark and 382 non-benchmark papers across prompt injection, jailbreak, and hallucination. We find that benchmark papers show no significant advantage in academic influence (e.g., citation count and density) over non-benchmark papers. We uncover a key misalignment: while author prominence correlates with paper influence, neither author prominence nor paper influence shows a significant correlation with code ...
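As a concrete illustration of the kind of correlation check the abstract describes, the sketch below tests whether a paper-influence metric (citation count) tracks a code-quality score. It is a minimal sketch only: the records, field names, and scoring scale are hypothetical, and the paper's actual metrics and statistical pipeline may differ.

# Hypothetical sketch of a correlation check between paper influence and
# code repository quality. All data and names here are invented for
# illustration; they are not the paper's actual measurements.
from scipy.stats import spearmanr

# Hypothetical per-paper records: (citation_count, code_quality_score)
papers = [
    (120, 0.62),
    (45, 0.81),
    (300, 0.40),
    (15, 0.75),
    (88, 0.55),
]

citations = [c for c, _ in papers]
quality = [q for _, q in papers]

# Spearman's rank correlation is robust to the heavy-tailed distribution
# of citation counts; a p-value above the chosen significance threshold
# would indicate no significant correlation, mirroring the misalignment
# the abstract reports.
rho, p_value = spearmanr(citations, quality)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")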