[2603.19532] EvidenceRL: Reinforcing Evidence Consistency for Trustworthy Language Models
Computer Science > Computation and Language
arXiv:2603.19532 (cs)
[Submitted on 20 Mar 2026]

Title: EvidenceRL: Reinforcing Evidence Consistency for Trustworthy Language Models
Authors: J. Ben Tamo, Yuxing Lu, Benoit L. Marteau, Micky C. Nnamdi, May D. Wang

Abstract: Large Language Models (LLMs) are fluent but prone to hallucinations, producing answers that appear plausible yet are unsupported by available evidence. This failure is especially problematic in high-stakes domains where decisions must be justified by verifiable information. We introduce \textbf{EvidenceRL}, a reinforcement learning framework that enforces evidence adherence during training. EvidenceRL scores candidate responses for grounding (entailment with retrieved evidence and context) and correctness (agreement with reference answers), and optimizes the generator using Group Relative Policy Optimization (GRPO). We evaluate across two high-stakes domains, cardiac diagnosis and legal reasoning, where EvidenceRL consistently improves evidence grounding and faithfulness without sacrificing task accuracy. On cardiac diagnosis, F1@3 increases from 37.0 to 54.5 on Llama-3.2-3B while grounding ($G_{\max}@3$) rises from 47.6 to 78.2; hallucinations drop nearly 5$\times$ and evidence-supported diagnoses increase from 31.8\% to 61.6\%. On legal reaso...
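The abstract's training loop can be pictured with a minimal sketch of the reward shaping it describes: each sampled candidate is scored for grounding and correctness, the two scores are blended into a single reward, and GRPO normalizes rewards within the sampled group to obtain relative advantages. The weights, the equal 0.5/0.5 mix, and the function names below are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the GRPO-style reward shaping described in the
# abstract. Grounding (entailment with retrieved evidence) and correctness
# (agreement with the reference answer) are assumed to already be scalars
# in [0, 1]; how they are computed is outside this sketch.
from statistics import mean, pstdev

def combined_reward(grounding: float, correctness: float,
                    w_g: float = 0.5, w_c: float = 0.5) -> float:
    """Blend a grounding score and a correctness score into one reward.
    The equal weighting is an assumption, not the paper's setting."""
    return w_g * grounding + w_c * correctness

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """GRPO core: standardize each reward against its own sampled group,
    so candidates are compared relative to their siblings."""
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # guard against uniform groups
    return [(r - mu) / sigma for r in rewards]

# Example: four sampled candidates for one prompt, with toy
# (grounding, correctness) score pairs.
scores = [combined_reward(g, c) for g, c in
          [(0.9, 1.0), (0.4, 1.0), (0.8, 0.0), (0.2, 0.0)]]
advantages = group_relative_advantages(scores)
```

In this toy group, the candidate that is both grounded and correct receives the highest relative advantage, while the ungrounded, incorrect one is pushed down; the policy gradient would then upweight the former over the latter.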