[2604.06201] Beyond Facts: Benchmarking Distributional Reading Comprehension in Large Language Models
Computer Science > Computation and Language
arXiv:2604.06201 (cs)
[Submitted on 13 Mar 2026]

Title: Beyond Facts: Benchmarking Distributional Reading Comprehension in Large Language Models

Authors: Pei-Fu Guo, Ya-An Tsai, Chun-Chia Hsu, Kai-Xin Chen, Yun-Da Tsai, Kai-Wei Chang, Nanyun Peng, Mi-Yen Yeh, Shou-De Lin

Abstract: While most reading comprehension benchmarks for LLMs focus on factual information that can be answered by localizing specific textual evidence, many real-world tasks require understanding distributional information, such as population-level trends and preferences expressed across collections of text. We introduce Text2DistBench, a reading comprehension benchmark for evaluating LLMs' ability to infer distributional knowledge from natural language. Built from real-world YouTube comments about movie and music entities, the benchmark provides models with entity metadata and associated comments, and requires them to answer distributional questions, such as estimating the proportions of positive and negative comments, or identifying the most and second most frequent topics discussed among viewers. To support reliable and long-term evaluation, the construction pipeline of Text2DistBench is fully automated and continuously updated to incorporate newly emerging entities over time. Exp...
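For the question types the abstract describes, the ground-truth answer reduces to an aggregate statistic over the comment set. A minimal sketch of those two statistics, assuming each comment has already been annotated with a sentiment and a topic label (the field names and example data below are hypothetical, not from the benchmark):

```python
from collections import Counter

# Hypothetical mini-corpus: each comment carries an illustrative
# sentiment label and topic label (the benchmark asks models to infer
# these distributions from raw comment text).
comments = [
    {"sentiment": "positive", "topic": "soundtrack"},
    {"sentiment": "positive", "topic": "acting"},
    {"sentiment": "negative", "topic": "pacing"},
    {"sentiment": "positive", "topic": "soundtrack"},
    {"sentiment": "negative", "topic": "soundtrack"},
]

def sentiment_proportions(comments):
    """Proportion of each sentiment label across the comment set."""
    counts = Counter(c["sentiment"] for c in comments)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

def top_k_topics(comments, k=2):
    """The k most frequently discussed topics, most frequent first."""
    counts = Counter(c["topic"] for c in comments)
    return [topic for topic, _ in counts.most_common(k)]

print(sentiment_proportions(comments))  # {'positive': 0.6, 'negative': 0.4}
print(top_k_topics(comments, k=2))      # ['soundtrack', ...]
```

A model answering a Text2DistBench question would have to arrive at numbers like these from the raw text alone, without explicit labels, which is what distinguishes distributional comprehension from evidence-localizing factual QA.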