[2602.07319] Beyond Accuracy: Risk-Sensitive Evaluation of Hallucinated Medical Advice
Computer Science > Computation and Language

arXiv:2602.07319 (cs)

[Submitted on 7 Feb 2026 (v1), last revised 27 Feb 2026 (this version, v2)]

Title: Beyond Accuracy: Risk-Sensitive Evaluation of Hallucinated Medical Advice

Authors: Savan Doshi

Abstract: Large language models are increasingly being used in patient-facing medical question answering, where hallucinated outputs can vary widely in potential harm. However, existing hallucination standards and evaluation metrics focus primarily on factual correctness, treating all errors as equally severe. This obscures clinically relevant failure modes, particularly when models generate unsupported but actionable medical language. We propose a risk-sensitive evaluation framework that quantifies hallucinations through the presence of risk-bearing language, including treatment directives, contraindications, urgency cues, and mentions of high-risk medications. Rather than assessing clinical correctness, our approach evaluates the potential impact of hallucinated content if acted upon. We further combine risk scoring with a relevance measure to identify high-risk, low-grounding failures. We apply this framework to three instruction-tuned language models using controlled patient-facing prompts designed as safety stress tests. Our results show that models with similar surface-level behavior...
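The abstract describes risk scoring driven by risk-bearing language categories, combined with a relevance (grounding) measure to flag high-risk, low-grounding failures. The sketch below is a minimal illustration of that idea, not the authors' implementation: the lexicons, weights, thresholds, and the token-overlap grounding proxy are all hypothetical stand-ins chosen for the example.

```python
import re

# Hypothetical lexicons and weights for the four risk categories named in the
# abstract; the paper's actual lexicons and scoring scheme are not given here.
RISK_PATTERNS = {
    "treatment_directive": (re.compile(r"\b(take|stop taking|start|double the dose)\b", re.I), 2.0),
    "contraindication": (re.compile(r"\b(contraindicated|do not combine|avoid if)\b", re.I), 2.0),
    "urgency_cue": (re.compile(r"\b(immediately|emergency|call 911|right away)\b", re.I), 3.0),
    "high_risk_medication": (re.compile(r"\b(warfarin|insulin|opioid|methotrexate)\b", re.I), 3.0),
}


def risk_score(answer: str) -> float:
    """Sum the weights of all risk categories whose language appears in the answer."""
    return sum(weight for pattern, weight in RISK_PATTERNS.values() if pattern.search(answer))


def grounding(answer: str, context: str) -> float:
    """Crude relevance proxy: fraction of answer tokens that also occur in the context."""
    answer_tokens = set(re.findall(r"\w+", answer.lower()))
    context_tokens = set(re.findall(r"\w+", context.lower()))
    return len(answer_tokens & context_tokens) / max(len(answer_tokens), 1)


def high_risk_low_grounding(answer: str, context: str,
                            risk_thresh: float = 2.0,
                            ground_thresh: float = 0.5) -> bool:
    """Flag answers that carry risk-bearing language yet are poorly grounded in the context."""
    return risk_score(answer) >= risk_thresh and grounding(answer, context) < ground_thresh
```

For example, an answer like "Stop taking warfarin immediately." triggers three categories (treatment directive, urgency cue, high-risk medication) and, against an unrelated context, would be flagged as a high-risk, low-grounding failure.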