[2604.05306] LLMs Should Express Uncertainty Explicitly
Computer Science > Machine Learning

arXiv:2604.05306 (cs)

[Submitted on 7 Apr 2026]

Title: LLMs Should Express Uncertainty Explicitly

Authors: Junyu Guo, Shangding Gu, Ming Jin, Costas Spanos, Javad Lavaei

Abstract: Large language models are increasingly used in settings where uncertainty must drive decisions such as abstention, retrieval, and verification. Most existing methods treat uncertainty as a latent quantity to estimate after generation rather than a signal the model is trained to express. We instead study uncertainty as an interface for control. We compare two complementary interfaces: a global interface, where the model verbalizes a calibrated confidence score for its final answer, and a local interface, where the model emits an explicit <uncertain> marker during reasoning when it enters a high-risk state. These interfaces provide different but complementary benefits. Verbalized confidence substantially improves calibration, reduces overconfident errors, and yields the strongest overall Adaptive RAG controller while using retrieval more selectively. Reasoning-time uncertainty signaling makes previously silent failures visible during generation, improves wrong-answer coverage, and provides an effective high-recall retrieval trigger. Our findings further show that the two interfaces work differently internally: verbal confidence mai...
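To make the two interfaces concrete, here is a minimal sketch (not from the paper) of how a downstream controller might consume them: the local <uncertain> marker acts as a high-recall retrieval trigger during reasoning, while the verbalized confidence gates abstention on the final answer. The confidence format ("Confidence: 0.72"), the threshold value, and all function names are assumptions for illustration, not the authors' implementation.

```python
import re

UNCERTAIN_TAG = "<uncertain>"  # assumed local-interface marker


def parse_confidence(answer: str) -> float | None:
    """Extract a verbalized confidence like 'Confidence: 0.72' (assumed format)."""
    m = re.search(r"Confidence:\s*([01](?:\.\d+)?)", answer)
    return float(m.group(1)) if m else None


def control_decision(reasoning: str, answer: str,
                     conf_threshold: float = 0.6) -> str:
    """Combine both signals: the local marker triggers retrieval; the
    global confidence score gates abstention on the final answer."""
    if UNCERTAIN_TAG in reasoning:
        return "retrieve"   # reasoning entered a high-risk state
    conf = parse_confidence(answer)
    if conf is None or conf < conf_threshold:
        return "abstain"    # low or missing verbalized confidence
    return "accept"


# Example usage with synthetic model output:
reasoning = "Step 1: recall the treaty date. <uncertain> Step 2: ..."
answer = "The treaty was signed in 1648. Confidence: 0.45"
print(control_decision(reasoning, answer))  # -> "retrieve"
```

Checking the local marker before the global score reflects the abstract's framing: the reasoning-time signal surfaces failures during generation, before a final answer and its confidence even exist.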