[2604.04074] FactReview: Evidence-Grounded Reviews with Literature Positioning and Execution-Based Claim Verification
Computer Science > Artificial Intelligence
arXiv:2604.04074 (cs) [Submitted on 5 Apr 2026]

Title: FactReview: Evidence-Grounded Reviews with Literature Positioning and Execution-Based Claim Verification
Authors: Hang Xu, Ling Yue, Chaoqian Ouyang, Libin Zheng, Shaowu Pan, Shimin Di, Min-Ling Zhang

Abstract: Peer review in machine learning is under growing pressure from rising submission volume and limited reviewer time. Most LLM-based reviewing systems read only the manuscript and generate comments from the paper's own narrative. This makes their outputs sensitive to presentation quality and leaves them weak when the evidence needed for review lies in related work or released code. We present FactReview, an evidence-grounded reviewing system that combines claim extraction, literature positioning, and execution-based claim verification. Given a submission, FactReview identifies major claims and reported results, retrieves nearby work to clarify the paper's technical position, and, when code is available, executes the released repository under bounded budgets to test central empirical claims. It then produces a concise review and an evidence report that assigns each major claim one of five labels: Supported, Supported by the paper, Partially supported, In conflict, or Inconclusive. In...
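The five-label evidence report described in the abstract can be sketched as a small data model. This is a minimal illustration, not the paper's implementation: the names `EvidenceLabel` and `ClaimRecord` are hypothetical, and only the five label strings come from the abstract itself.

```python
from dataclasses import dataclass
from enum import Enum


class EvidenceLabel(Enum):
    """The five labels FactReview assigns to each major claim (per the abstract)."""
    SUPPORTED = "Supported"
    SUPPORTED_BY_PAPER = "Supported by the paper"
    PARTIALLY_SUPPORTED = "Partially supported"
    IN_CONFLICT = "In conflict"
    INCONCLUSIVE = "Inconclusive"


@dataclass
class ClaimRecord:
    """One entry in a hypothetical evidence report."""
    claim: str            # a major claim extracted from the submission
    evidence: str         # e.g. a retrieved citation or an execution log excerpt
    label: EvidenceLabel  # one of the five labels above


# Illustrative report entry (the claim and evidence text are invented examples).
report = [
    ClaimRecord(
        claim="Method X improves accuracy by 3 points over baseline Y",
        evidence="Re-ran the released repository under a bounded budget",
        label=EvidenceLabel.PARTIALLY_SUPPORTED,
    ),
]
```

A real system would attach richer provenance (repository commit, execution budget, retrieved papers) to each record; this sketch only fixes the label vocabulary.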