[2602.18446] ReportLogic: Evaluating Logical Quality in Deep Research Reports
Summary
The paper introduces ReportLogic, a benchmark for evaluating the logical quality of reports generated by Large Language Models (LLMs), focusing on auditability and structured reasoning.
Why It Matters
As reliance on LLMs grows, ensuring the logical integrity of generated reports is crucial for trust and usability. ReportLogic addresses the gap in current evaluation frameworks by providing a structured approach to assess the logical quality of these reports, which is essential for effective decision-making based on AI-generated content.
Key Takeaways
- ReportLogic introduces a hierarchical taxonomy for evaluating logical quality in LLM-generated reports.
- The framework emphasizes the importance of auditability in assessing the reliability of AI-generated content.
- A human-annotated dataset and an open-source tool, LogicJudge, are developed for scalable evaluation.
- The study reveals that existing LLM judges can be misled by superficial cues, highlighting the need for robust evaluation methods.
- Findings provide actionable insights for enhancing the logical reliability of AI-generated reports.
Abstract
Computer Science > Computation and Language · arXiv:2602.18446 (cs) · Submitted on 27 Jan 2026
Authors: Jujia Zhao, Zhaoxin Huan, Zihan Wang, Xiaolu Zhang, Jun Zhou, Suzan Verberne, Zhaochun Ren
Users increasingly rely on Large Language Models (LLMs) for Deep Research, using them to synthesize diverse sources into structured reports that support understanding and action. In this context, the practical reliability of such reports hinges on logical quality: whether the report's claims and arguments are explicitly supported and can be trusted as a basis for downstream use, rather than merely appearing fluent or informative. However, current evaluation frameworks largely overlook this requirement. To bridge this gap, we introduce ReportLogic, a benchmark that quantifies report-level logical quality through a reader-centric lens of auditability. Specifically, ReportLogic adopts a hierarchical taxonomy that evaluates whether readers can (1) trace an on-topic report structure with a unified analytical arc (Macro-Logic), (2) understand the progression with necessary context (Expositional-Logic), and (3) verify conclusions via explicit claim--support (Structural-Logic). Based on this taxonomy, we construct a human-annotated rubric-guided dataset and...
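To make the three-level taxonomy concrete, here is a minimal sketch of how per-level rubric scores could be combined into a single report-level score. The class, weights, and aggregation scheme are illustrative assumptions, not the paper's actual method or the LogicJudge API.

```python
# Hypothetical sketch: aggregating scores on ReportLogic's three taxonomy
# levels (Macro-, Expositional-, and Structural-Logic) into one value.
# Criterion names and the equal weighting are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class LogicScores:
    macro: float          # on-topic structure with a unified analytical arc
    expositional: float   # progression understandable with necessary context
    structural: float     # conclusions verifiable via explicit claim--support

def aggregate(scores: LogicScores,
              weights: tuple[float, float, float] = (1/3, 1/3, 1/3)) -> float:
    """Weighted average across the three taxonomy levels (weights assumed)."""
    parts = (scores.macro, scores.expositional, scores.structural)
    return sum(w * s for w, s in zip(weights, parts))

report = LogicScores(macro=0.8, expositional=0.6, structural=0.9)
print(round(aggregate(report), 3))  # -> 0.767
```

A hierarchical taxonomy like this lets an evaluator report both the aggregate and the per-level breakdown, so a fluent report that fails claim–support verification is still penalized on the Structural-Logic axis.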