[2605.07905] CoCoReviewBench: A Completeness- and Correctness-Oriented Benchmark for AI Reviewers
Computer Science > Computation and Language
arXiv:2605.07905 (cs) [Submitted on 8 May 2026]

Title: CoCoReviewBench: A Completeness- and Correctness-Oriented Benchmark for AI Reviewers
Authors: Hexuan Deng, Xiaopeng Ke, Yichen Li, Ruina Hu, Dehao Huang, Derek F. Wong, Yue Wang, Xuebo Liu, Min Zhang

Abstract: Despite the rapid development of AI reviewers, evaluating such systems remains challenging: existing metrics reward overlap with human reviews rather than correctness. Yet human reviews often cover only a subset of salient issues and sometimes contain mistakes, making them unreliable as gold references. To strengthen Completeness, we build category-specific benchmark subsets and skip evaluation when the corresponding human reviews are missing. To strengthen Correctness, we leverage reviewer-author-meta-review discussions as expert annotations and filter out unreliable reviews accordingly. Building on these, we introduce CoCoReviewBench, which curates 3,900 papers from ICLR and NeurIPS to enable reliable, fine-grained evaluation of AI reviewers. Our analysis shows that AI reviewers remain limited in correctness and are prone to hallucination, and highlights reasoning models as more effective reviewers, motivating further directions for improving AI reviewers. Benchmarks and models are available at this https URL.