[2502.08666] Hallucination, Monofacts, and Miscalibration: An Empirical Investigation
Computer Science > Computation and Language

arXiv:2502.08666 (cs)

[Submitted on 11 Feb 2025 (v1), last revised 3 Mar 2026 (this version, v3)]

Title: Hallucination, Monofacts, and Miscalibration: An Empirical Investigation

Authors: Miranda Muqing Miao, Michael Kearns

Abstract: Hallucinated facts in large language models (LLMs) have recently been shown to obey a statistical lower bound determined by the monofact rate (related to the classical Good-Turing missing mass estimator) minus model miscalibration (Kalai & Vempala, 2024). We present the first empirical investigation of this three-way relationship in classical n-gram models and fine-tuned encoder-decoder Transformers. By generating training data from Pareto distributions with varying shape parameters, we systematically control the monofact rate and establish its positive relationship with hallucination. To bridge theory and practice, we derive an empirical analog of the hallucination bound by replacing the population miscalibration term (Section 2.1) with an empirical bin-wise KL divergence and confirm its practical viability. We then introduce selective upweighting -- a simple yet effective technique that strategically repeats as little as 5% of training examples -- to deliberately inject miscalibration into the model. This intervention reduces ha...
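The monofact rate mentioned in the abstract is tied to the classical Good-Turing missing mass estimator: the fraction of training observations that are singletons. A minimal sketch of how one might estimate it on synthetic data drawn from Pareto distributions with varying shape parameters (the `monofact_rate` helper and the discretization into integer "fact" IDs are illustrative assumptions, not the paper's exact construction):

```python
import numpy as np

def monofact_rate(samples):
    """Good-Turing-style estimate: the fraction of observations whose
    value appears exactly once ("monofacts"). This singleton fraction,
    N1/N, is the classical Good-Turing estimate of the missing mass."""
    _, counts = np.unique(samples, return_counts=True)
    n_singletons = int(np.sum(counts == 1))
    return n_singletons / len(samples)

# Smaller Pareto shape parameter -> heavier tail -> more rare facts,
# so the monofact rate should rise as the shape parameter falls.
rng = np.random.default_rng(0)
for shape in (0.5, 1.0, 2.0):
    # Discretize continuous Pareto draws into integer "fact" identifiers.
    facts = np.floor(rng.pareto(shape, size=10_000)).astype(int)
    print(f"shape={shape}: monofact rate {monofact_rate(facts):.3f}")
```

Varying the shape parameter this way is how one could systematically control the monofact rate of a synthetic corpus, as the abstract describes.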
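The empirical analog of the bound replaces the population miscalibration term with a bin-wise KL divergence. A rough sketch of what such a term could look like, assuming standard confidence binning (the `binwise_kl` function, its binning scheme, and the sample-weighted binary KL are assumptions; the paper's Section 2.1 definition may differ):

```python
import numpy as np

def binwise_kl(confidences, outcomes, n_bins=10):
    """Empirical bin-wise KL divergence between mean predicted confidence
    and observed accuracy, a calibration-style stand-in for the population
    miscalibration term. Bins are equal-width over [0, 1]; each bin's
    binary KL(q || p) is weighted by its share of the samples."""
    eps = 1e-12
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(confidences, bins) - 1, 0, n_bins - 1)
    total = 0.0
    for b in range(n_bins):
        mask = idx == b
        if not mask.any():
            continue
        p = np.clip(confidences[mask].mean(), eps, 1 - eps)  # predicted
        q = np.clip(outcomes[mask].mean(), eps, 1 - eps)     # observed
        kl = q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))
        total += mask.mean() * kl
    return total

# A perfectly calibrated bin (70% confidence, 70% accuracy) contributes ~0;
# overconfidence (90% confidence, 70% accuracy) yields a positive term.
conf = np.full(10, 0.7)
out = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 0], dtype=float)
print(binwise_kl(conf, out), binwise_kl(np.full(10, 0.9), out))
```

The point of such a term is that it is computable from held-out predictions alone, which is what makes the empirical version of the bound checkable in practice.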
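Selective upweighting, as described in the abstract, strategically repeats a small fraction of training examples so the model becomes miscalibrated on them. A hypothetical sketch of the mechanics (the `selectively_upweight` helper, its random selection, and its parameters are illustrative; the paper's actual selection criterion is not specified here):

```python
import random

def selectively_upweight(examples, frac=0.05, extra_copies=1, seed=0):
    """Duplicate a random fraction of training examples (default 5%),
    deliberately injecting miscalibration by overrepresenting them.
    Hypothetical sketch: the paper may select examples non-randomly."""
    rng = random.Random(seed)
    k = max(1, int(frac * len(examples)))
    chosen = rng.sample(examples, k)
    return examples + chosen * extra_copies

corpus = [f"fact_{i}" for i in range(100)]
augmented = selectively_upweight(corpus, frac=0.05)
print(len(augmented))  # 100 originals + 5 repeated examples -> 105
```

No new examples are introduced; only the empirical frequencies shift, which is what lets the intervention trade calibration for a lower hallucination rate in the abstract's framing.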