[2604.03376] VERT: Reliable LLM Judges for Radiology Report Evaluation
Computer Science > Artificial Intelligence

arXiv:2604.03376 (cs)

[Submitted on 3 Apr 2026]

Title: VERT: Reliable LLM Judges for Radiology Report Evaluation

Authors: Federica Bologna, Jean-Philippe Corbeil, Matthew Wilkens, Asma Ben Abacha

Abstract: Current literature on radiology report evaluation has focused primarily on designing LLM-based metrics and fine-tuning small models for chest X-rays. However, it remains unclear whether these approaches are robust when applied to reports from other modalities and anatomies. Which model and prompt configurations are best suited to serve as LLM judges for radiology evaluation? We conduct a thorough correlation analysis between expert and LLM-based ratings. We compare three existing LLM-as-a-judge metrics (RadFact, GREEN, and FineRadScore) alongside VERT, our proposed LLM-based metric, using open- and closed-source models (reasoning and non-reasoning) of different sizes across two expert-annotated datasets, RadEval and RaTE-Eval, spanning multiple modalities and anatomies. We further evaluate few-shot approaches, ensembling, and parameter-efficient fine-tuning using RaTE-Eval. To better understand metric behavior, we perform a systematic error detection and categorization study to assess the alignment of these metrics with expert judgments and identify areas of lower and higher agreement.
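The abstract does not specify which correlation statistic the expert-vs-LLM analysis uses; as an illustration only, a rank correlation such as Kendall's tau is a common choice for comparing a judge's ordinal scores against expert ratings. The scores below are invented for the sketch, not taken from the paper.

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall rank correlation between two equal-length score lists.

    Counts concordant vs. discordant pairs over all item pairs; tied
    pairs contribute to neither (simple tau-a style numerator).
    """
    assert len(x) == len(y) and len(x) > 1
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / n_pairs

# Hypothetical 1-5 quality ratings for five radiology reports:
expert_scores = [4, 2, 5, 1, 3]   # expert annotations
llm_scores    = [3, 2, 5, 1, 4]   # an LLM judge's scores
print(kendall_tau(expert_scores, llm_scores))  # → 0.8
```

A higher tau means the LLM judge orders reports more consistently with the experts, which is the kind of alignment the paper's analysis measures across models, prompts, and datasets.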