[2602.24195] Uncertainty Quantification for Multimodal Large Language Models with Incoherence-adjusted Semantic Volume
Computer Science > Artificial Intelligence
arXiv:2602.24195 (cs)
[Submitted on 27 Feb 2026]

Title: Uncertainty Quantification for Multimodal Large Language Models with Incoherence-adjusted Semantic Volume
Authors: Gregory Kang Ruey Lau, Hieu Dao, Nicole Kan Hui Lin, Bryan Kian Hsiang Low

Abstract: Despite their capabilities, Multimodal Large Language Models (MLLMs) may produce plausible but erroneous outputs, hindering reliable deployment. Accurate uncertainty metrics could enable escalation of unreliable queries to human experts or larger models for improved performance. However, existing uncertainty metrics have practical constraints, such as being designed only for specific modalities, relying on external tools, or being computationally expensive. We introduce UMPIRE, a training-free uncertainty quantification framework for MLLMs that works efficiently across various input and output modalities without external tools, relying only on the models' own internal modality features. UMPIRE computes the incoherence-adjusted semantic volume of sampled MLLM responses for a given task instance, effectively capturing both the global semantic diversity of samples and the local incoherence of responses based on internal model confidence. We propose uncertainty desiderata for MLLMs an...
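To make the abstract's central quantity concrete, here is a minimal sketch of one way a "semantic volume" over sampled responses could be computed: the log-determinant of the Gram matrix of response embeddings, with rows down-weighted by a per-response confidence score to mimic an incoherence adjustment. This is an illustrative reading of the abstract, not the paper's actual method; the embeddings, the confidence weights, and the `semantic_volume` function are all hypothetical placeholders.

```python
# Illustrative sketch only -- NOT the UMPIRE implementation from the paper.
# Intuition: responses that span more semantic directions (higher diversity)
# yield a larger Gram-matrix volume, i.e. higher uncertainty; low internal
# confidence further inflates that volume.
import numpy as np

def semantic_volume(embeddings: np.ndarray, confidences: np.ndarray) -> float:
    """Confidence-adjusted log-volume of a set of sampled responses.

    embeddings:  (n, d) array, one row per sampled response (hypothetical).
    confidences: (n,) array in (0, 1]; lower confidence inflates the volume.
    """
    # Normalize rows so the volume reflects angular spread, not vector length.
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    # Scale each row by 1/sqrt(confidence): incoherent (low-confidence)
    # responses contribute extra volume -- a stand-in "incoherence adjustment".
    W = np.diag(1.0 / np.sqrt(confidences))
    G = W @ X @ X.T @ W
    # Small ridge keeps the Gram matrix positive definite for slogdet.
    _, logdet = np.linalg.slogdet(G + 1e-6 * np.eye(len(G)))
    return float(logdet)

# A tight cluster of confident samples should score lower uncertainty than
# spread-out, low-confidence samples.
rng = np.random.default_rng(0)
tight = rng.normal([1.0, 0.0, 0.0], 0.01, size=(5, 3))
spread = rng.normal(0.0, 1.0, size=(5, 3))
v_low = semantic_volume(tight, np.full(5, 0.95))
v_high = semantic_volume(spread, np.full(5, 0.5))
print(v_low < v_high)  # True: diverse, low-confidence samples score higher
```

In this toy setup the score could gate escalation, as the abstract suggests: queries whose sampled responses exceed a volume threshold would be routed to a human expert or a larger model.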