[2603.24967] The Anatomy of Uncertainty in LLMs
Computer Science > Artificial Intelligence

arXiv:2603.24967 (cs) [Submitted on 26 Mar 2026]

Title: The Anatomy of Uncertainty in LLMs
Authors: Aditya Taparia, Ransalu Senanayake, Kowshik Thopalli, Vivek Narayanaswamy

Abstract: Understanding why a large language model (LLM) is uncertain about its response is important for reliable deployment. Current approaches, which either provide a single uncertainty score or rely on the classical aleatoric-epistemic dichotomy, fail to offer actionable insights for improving the generative model, and recent studies have shown that such methods are insufficient for understanding uncertainty in LLMs. In this work, we advocate an uncertainty decomposition framework that dissects LLM uncertainty into three distinct semantic components: (i) input ambiguity, arising from ambiguous prompts; (ii) knowledge gaps, caused by insufficient parametric evidence; and (iii) decoding randomness, stemming from stochastic sampling. Through a series of experiments, we demonstrate that the dominance of these components can shift with model size and task. Our framework provides a better basis for auditing LLM reliability and detecting hallucinations, paving the way for targeted interventions and more trustworthy systems.

Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.24967 [cs.AI]
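As a toy illustration of the third component, decoding randomness can be proxied by the entropy of responses sampled repeatedly for the same prompt. The sketch below is not from the paper: the `response_entropy` helper, the exact-match clustering, and the sample strings are our own illustrative assumptions (the paper's actual estimators may differ).

```python
from collections import Counter
from math import log2

def response_entropy(samples: list[str]) -> float:
    """Shannon entropy (in bits) over clusters of identical sampled responses.

    High entropy suggests stochastic decoding is a large source of the
    model's uncertainty; near-zero entropy suggests the model answers
    consistently, so remaining uncertainty likely lies in input ambiguity
    or knowledge gaps instead.
    """
    # Crude clustering: normalize case/whitespace and group exact matches.
    counts = Counter(s.strip().lower() for s in samples)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical responses drawn at temperature > 0 for one prompt.
samples = ["Paris", "Paris", "Lyon", "Paris"]
print(round(response_entropy(samples), 3))  # → 0.811
```

In practice one would cluster by semantic equivalence (e.g. with an entailment model) rather than exact string match, but the entropy computation itself is the same.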