[2603.02788] Agentified Assessment of Logical Reasoning Agents
Computer Science > Artificial Intelligence
arXiv:2603.02788 (cs) [Submitted on 3 Mar 2026]

Title: Agentified Assessment of Logical Reasoning Agents
Authors: Zhiyu Ni, Yifeng Xiao, Zheng Liang

Abstract: We present a framework for evaluating and benchmarking logical reasoning agents when assessment itself must be reproducible, auditable, and robust to execution failures. Building on agentified assessment, we use an assessor agent to issue tasks, enforce execution budgets, parse outputs, and record structured failure types, while the agent under test only needs to expose a standardized agent-to-agent interface. As a case study, we benchmark an auto-formalization agent for first-order logic (FOL) reasoning on a solver-verified and repaired split of FOLIO. The agent translates natural language premises and conclusions into executable Z3Py programs and employs satisfiability modulo theories (SMT) solving to determine logical entailment. On the cleaned FOLIO validation set, the auto-formalization agent achieves 86.70% accuracy under the assessor protocol, outperforming a chain-of-thought baseline (73.89%).

Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.02788 [cs.AI] (or arXiv:2603.02788v1 [cs.AI] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.02788
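The entailment check the abstract describes follows the standard refutation pattern: assert the formalized premises together with the negated conclusion, and report entailment exactly when that conjunction is unsatisfiable. The sketch below illustrates this decision procedure with a toy propositional model enumerator rather than Z3Py; the function names and formula encoding are illustrative assumptions, not the paper's actual implementation.

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Refutation-based entailment: premises |= conclusion iff
    (premises AND NOT conclusion) is unsatisfiable.

    Propositional stand-in for the paper's Z3Py/SMT pipeline.
    Each formula is a callable mapping an assignment dict to bool.
    """
    for values in product([False, True], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        # A satisfying assignment here is a countermodel: all premises
        # hold but the conclusion fails, so entailment does not hold.
        if all(p(assignment) for p in premises) and not conclusion(assignment):
            return False
    # No countermodel exists: premises AND NOT conclusion is unsat.
    return True

# Modus ponens: {p, p -> q} |= q
p = lambda a: a["p"]
q = lambda a: a["q"]
p_implies_q = lambda a: (not a["p"]) or a["q"]
print(entails([p, p_implies_q], q, ["p", "q"]))      # True

# Affirming the consequent: {q, p -> q} does not entail p
print(entails([q, p_implies_q], p, ["p", "q"]))      # False
```

In the Z3Py setting, the same logic would assert the premise formulas and the negated conclusion on a `Solver` and map an `unsat` result to "entailed"; the exhaustive enumeration here simply makes the countermodel search explicit for the propositional case.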