[2604.06820] Beyond Surface Judgments: Human-Grounded Risk Evaluation of LLM-Generated Disinformation
Computer Science > Artificial Intelligence
arXiv:2604.06820 (cs)
[Submitted on 8 Apr 2026]

Title: Beyond Surface Judgments: Human-Grounded Risk Evaluation of LLM-Generated Disinformation
Authors: Zonghuan Xu, Xiang Zheng, Yutao Wu, Xingjun Ma

Abstract: Large language models (LLMs) can generate persuasive narratives at scale, raising concerns about their potential use in disinformation campaigns. Assessing this risk ultimately requires understanding how readers receive such content. In practice, however, LLM judges are increasingly used as a low-cost substitute for direct human evaluation, even though it remains unclear whether they faithfully track reader responses. We recast evaluation in this setting as a proxy-validity problem and audit LLM judges against human reader responses. Using 290 aligned articles, 2,043 paired human ratings, and outputs from eight frontier judges, we examine judge--human alignment in terms of overall scoring, item-level ordering, and signal dependence. We find persistent judge--human gaps throughout. Relative to humans, judges are typically harsher, recover item-level human rankings only weakly, and rely on different textual signals, placing more weight on logical rigour while penalizing emotional intensity more strongly. At the same time, judges agree far more with on...
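The abstract's "item-level ordering" analysis presumably compares how a judge ranks articles against how human readers rank them. As an illustrative sketch (not taken from the paper), one standard way to quantify such rank agreement is Spearman's rank correlation; the scores below are made-up placeholders, and the function names are hypothetical:

```python
def ranks(xs):
    """Return average ranks (1-based), sharing the mean rank across ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho: Pearson correlation computed on the ranks of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-article scores: mean human rating vs. one judge's rating.
human_scores = [4.2, 3.1, 2.8, 4.7, 3.5, 2.2]
judge_scores = [3.0, 2.5, 2.9, 3.8, 2.6, 1.9]
print(f"Spearman rho = {spearman(human_scores, judge_scores):.2f}")
```

A rho near 1 would mean the judge reproduces the human ordering; the paper reports that this agreement is in fact weak.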