[2603.03116] Beyond Task Completion: Revealing Corrupt Success in LLM Agents through Procedure-Aware Evaluation
Computer Science > Artificial Intelligence

arXiv:2603.03116 (cs)

[Submitted on 3 Mar 2026]

Title: Beyond Task Completion: Revealing Corrupt Success in LLM Agents through Procedure-Aware Evaluation

Authors: Hongliu Cao, Ilias Driouich, Eoin Thomas

Abstract: Large Language Model (LLM)-based agents are increasingly adopted in high-stakes settings, but current benchmarks evaluate mainly whether a task was completed, not how. We introduce Procedure-Aware Evaluation (PAE), a framework that formalizes agent procedures as structured observations and exposes consistency relationships between what agents observe, communicate, and execute. PAE evaluates agents along complementary axes (Utility, Efficiency, Interaction Quality, Procedural Integrity) and applies multi-dimensional gating that categorically disqualifies corrupt outcomes. Evaluating state-of-the-art LLM agents on tau-bench yields findings at the axis, compliance, and benchmark levels. At the axis level, the dimensions capture non-redundant failure modes: utility masks reliability gaps, speed does not imply precision, and conciseness does not predict intent adherence. At the procedural compliance level, 27-78% of benchmark-reported successes are corrupt successes concealing violations across interaction and integrity. Furthermore, gat...
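
The gating rule can be made concrete with a small sketch. The snippet below is a hypothetical illustration, not the paper's implementation: the axis names follow the abstract, while the scoring conventions, thresholds, and the `classify` helper are assumptions. The idea is that a run counts as a success only if it clears the gate on every axis; a run that completes the task but fails any other gate is reclassified as a corrupt success.

```python
from dataclasses import dataclass

# Hypothetical per-axis scores in [0, 1]; the paper's actual scoring
# functions are not given on this abstract page.
@dataclass
class AxisScores:
    utility: float               # task-level outcome quality
    efficiency: float            # e.g. step/latency budget adherence
    interaction_quality: float   # e.g. communication / intent adherence
    procedural_integrity: float  # consistency of observed vs. executed steps

def classify(scores: AxisScores, gates: dict[str, float]) -> tuple[str, list[str]]:
    """Gate every axis; a run that completes the task but violates any
    other gate is categorically disqualified as a 'corrupt success'."""
    violations = [axis for axis, threshold in gates.items()
                  if getattr(scores, axis) < threshold]
    completed = "utility" not in violations
    if completed and violations:
        return "corrupt success", violations
    return ("success", []) if completed else ("failure", violations)

# Example: the task completes (utility clears its gate), but procedural
# integrity fails, so the benchmark's nominal 'success' is reclassified.
scores = AxisScores(utility=1.0, efficiency=0.9,
                    interaction_quality=0.8, procedural_integrity=0.4)
gates = {"utility": 1.0, "efficiency": 0.5,
         "interaction_quality": 0.5, "procedural_integrity": 0.5}
print(classify(scores, gates))  # ('corrupt success', ['procedural_integrity'])
```

Under these assumed thresholds, a conventional task-completion metric would score the run above as a success, while the gated view surfaces the integrity violation it conceals.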