[2604.04418] Justified or Just Convincing? Error Verifiability as a Dimension of LLM Quality
Computer Science > Human-Computer Interaction
arXiv:2604.04418 (cs)
[Submitted on 6 Apr 2026]

Title: Justified or Just Convincing? Error Verifiability as a Dimension of LLM Quality
Authors: Xiaoyuan Zhu, Kimberly Le Truong, Riccardo Fogliato, Gokul Swamy, Weijian Zhang, Minglai Yang, Longtian Ye, Bangya Liu, Minghao Liu, Andrew Ilyas, Steven Wu

Abstract: As LLMs are deployed in high-stakes settings, users must judge the correctness of individual responses, often relying on model-generated justifications such as reasoning chains or explanations. Yet, no standard measure exists for whether these justifications help users distinguish correct answers from incorrect ones. We formalize this idea as error verifiability and propose $v_{\text{bal}}$, a balanced metric that measures whether justifications enable raters to accurately assess answer correctness, validated against human raters who show high agreement. We find that neither common approaches, such as post-training and model scaling, nor more targeted interventions reliably improve verifiability. We introduce two methods that succeed at improving verifiability: reflect-and-rephrase (RR) for mathematical reasoning and oracle-rephrase (OR) for factual QA, both of which improve verifiability by incorporating domain-appropriate external information. To...
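The abstract does not spell out the formula for $v_{\text{bal}}$. One natural reading of a "balanced" verifiability score is balanced accuracy: the mean of rater accuracy on correct answers and on incorrect answers, which keeps a rater who always accepts (or always rejects) from scoring well. The sketch below is purely illustrative of that interpretation, not the paper's definition:

```python
def v_bal(answer_correct, rater_says_correct):
    """Hypothetical balanced verifiability score: balanced accuracy of
    rater verdicts against ground-truth answer correctness.

    answer_correct:    list of bools, ground truth per response
    rater_says_correct: list of bools, rater verdict per response
    """
    assert len(answer_correct) == len(rater_says_correct)
    # True positives: correct answers the rater accepted.
    tp = sum(a and r for a, r in zip(answer_correct, rater_says_correct))
    # True negatives: incorrect answers the rater rejected.
    tn = sum((not a) and (not r) for a, r in zip(answer_correct, rater_says_correct))
    n_pos = sum(answer_correct)
    n_neg = len(answer_correct) - n_pos
    # Average the per-class accuracies so class imbalance cannot inflate the score.
    return 0.5 * (tp / n_pos + tn / n_neg)

# A rater who always says "correct" gets 0.5, not 1.0:
print(v_bal([True, True, False, False], [True, True, True, True]))
```

Under this reading, a score of 0.5 means the justifications give raters no signal about correctness, and 1.0 means raters can verify every answer perfectly.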