[2603.19281] URAG: A Benchmark for Uncertainty Quantification in Retrieval-Augmented Large Language Models
Computer Science > Computation and Language
arXiv:2603.19281 (cs)
[Submitted on 2 Mar 2026]

Title: URAG: A Benchmark for Uncertainty Quantification in Retrieval-Augmented Large Language Models
Authors: Vinh Nguyen, Cuong Dang, Jiahao Zhang, Hoa Tran, Minh Tran, Trinh Chau, Thai Le, Lu Cheng, Suhang Wang

Abstract: Retrieval-Augmented Generation (RAG) has emerged as a widely adopted approach for enhancing LLMs in scenarios that demand extensive factual knowledge. However, current RAG evaluations concentrate primarily on correctness, which may not fully capture the impact of retrieval on LLM uncertainty and reliability. To bridge this gap, we introduce URAG, a comprehensive benchmark designed to assess the uncertainty of RAG systems across diverse domains, including healthcare, programming, science, math, and general text. By reformulating open-ended generation tasks into multiple-choice question answering, URAG enables principled uncertainty quantification via conformal prediction. We apply the evaluation pipeline to 8 standard RAG methods, measuring their performance through both accuracy and prediction-set sizes based on the LAC and APS scores. Our analysis shows that (1) accuracy gains often coincide with reduced uncertainty, but this relationship breaks under retrieval noise; (2) simple modular...
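The abstract's pipeline, reframing generation as multiple-choice QA so conformal prediction applies, can be illustrated with the standard LAC score it mentions. The sketch below is not the paper's implementation; it is a minimal, generic version of split conformal prediction with the LAC nonconformity score (1 minus the softmax probability of the correct option), where `cal_probs`, `cal_labels`, and the finite-sample quantile correction follow the usual conformal recipe rather than anything specific to URAG.

```python
import numpy as np

def lac_calibrate(cal_probs, cal_labels, alpha=0.1):
    """Compute the LAC threshold from a held-out calibration set.

    cal_probs:  (n, k) softmax probabilities over the k answer options.
    cal_labels: (n,) indices of the correct option for each question.
    alpha:      target miscoverage rate (1 - alpha coverage guarantee).
    """
    n = len(cal_labels)
    # LAC nonconformity score: 1 - probability assigned to the true option.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level for valid marginal coverage.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q_level, 1.0), method="higher")

def lac_prediction_set(probs, q_hat):
    """Return the prediction set: all options whose score is below the threshold.

    A larger set signals higher model uncertainty on that question.
    """
    return [i for i, p in enumerate(probs) if 1.0 - p <= q_hat]

# Toy usage: a confident model gets small prediction sets.
cal_probs = np.tile([0.9, 0.05, 0.03, 0.02], (9, 1))
cal_labels = np.zeros(9, dtype=int)
q_hat = lac_calibrate(cal_probs, cal_labels, alpha=0.1)
print(lac_prediction_set([0.9, 0.05, 0.03, 0.02], q_hat))
```

Under this framing, average prediction-set size is the uncertainty metric: a well-calibrated, low-uncertainty RAG system yields sets close to a single option, while retrieval noise inflates them. APS differs only in the score function (cumulative sorted probabilities instead of 1 - p).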