[2508.18473] Principled Detection of Hallucinations in Large Language Models via Multiple Testing
Computer Science > Computation and Language
arXiv:2508.18473 (cs)
[Submitted on 25 Aug 2025 (v1), last revised 28 Apr 2026 (this version, v3)]

Title: Principled Detection of Hallucinations in Large Language Models via Multiple Testing
Authors: Jiawei Li, Akshayaa Magesh, Venugopal V. Veeravalli

Abstract: While Large Language Models (LLMs) have emerged as powerful foundational models for a variety of tasks, they have also been shown to be prone to hallucinations, i.e., generating responses that sound confident but are actually incorrect or even nonsensical. Existing hallucination detectors propose a wide range of empirical scoring rules, but their performance varies across models and datasets, making it hard to determine which ones to rely on in practice or to treat as reliable detectors. In this work, we formulate the problem of detecting hallucinations as a hypothesis testing problem and draw parallels with the problem of out-of-distribution detection in machine learning models. We then propose a multiple-testing-inspired method that systematically aggregates multiple evaluation scores via conformal p-values, enabling calibrated detection with a controlled false alarm rate. Extensive experiments across diverse models and datasets validate the robustness of our approach against state-of-the-art methods.
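
To make the abstract's central idea concrete, the sketch below shows one way conformal p-values from several hallucination scorers could be combined under a multiple-testing correction so that the false alarm rate stays below a target level alpha. This is a minimal illustration under stated assumptions: the score convention (larger score = more suspicious), the calibration data (scores on responses known to be faithful), and the Bonferroni-style combination rule are all illustrative choices, not necessarily the aggregation used in the paper.

    import numpy as np

    def conformal_p_value(test_score, calib_scores):
        # Conformal p-value: fraction of calibration scores (computed on
        # trusted, non-hallucinated responses) that are at least as extreme
        # as the test score, with the +1 correction that gives validity
        # under exchangeability.
        calib_scores = np.asarray(calib_scores)
        return (1 + np.sum(calib_scores >= test_score)) / (len(calib_scores) + 1)

    def detect_hallucination(test_scores, calib_score_lists, alpha=0.05):
        # One conformal p-value per scoring rule, aggregated with a
        # Bonferroni-style multiple-testing rule; flag a hallucination when
        # the combined p-value falls below alpha.
        p_values = [
            conformal_p_value(s, calib)
            for s, calib in zip(test_scores, calib_score_lists)
        ]
        combined_p = min(1.0, len(p_values) * min(p_values))
        return combined_p <= alpha, combined_p

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Hypothetical calibration scores from two different detectors,
        # gathered on responses assumed to be faithful (the null hypothesis).
        calib = [rng.normal(0.0, 1.0, 500), rng.normal(0.0, 1.0, 500)]
        # A test response that one detector scores as highly anomalous.
        flagged, p = detect_hallucination([3.2, 0.4], calib, alpha=0.05)
        print(f"hallucination={flagged}, combined p-value={p:.4f}")

Because each conformal p-value is individually valid when test and calibration scores are exchangeable, a standard multiple-testing correction such as the Bonferroni rule above keeps the probability of falsely flagging a faithful response at or below alpha, regardless of which detector happens to be the most reliable on a given model or dataset.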