[2604.04815] LiveFact: A Dynamic, Time-Aware Benchmark for LLM-Driven Fake News Detection
Computer Science > Computation and Language
arXiv:2604.04815 (cs)
[Submitted on 6 Apr 2026]

Title: LiveFact: A Dynamic, Time-Aware Benchmark for LLM-Driven Fake News Detection
Authors: Cheng Xu, Changhong Jin, Yingjie Niu, Nan Yan, Yuke Mei, Shuhao Guan, Liming Chen, M-Tahar Kechadi

Abstract: The rapid development of Large Language Models (LLMs) has transformed fake news detection and fact-checking from simple classification into complex reasoning tasks. Evaluation frameworks, however, have not kept pace: current benchmarks are static, making them vulnerable to benchmark data contamination (BDC) and ineffective at assessing reasoning under temporal uncertainty. To address this, we introduce LiveFact, a continuously updated benchmark that simulates the real-world "fog of war" in misinformation detection. LiveFact uses dynamic, temporal evidence sets to evaluate models on their ability to reason with evolving, incomplete information rather than on memorized knowledge. We propose a dual-mode evaluation: Classification Mode for final verification and Inference Mode for evidence-based reasoning, along with a component that explicitly monitors BDC. Tests with 22 LLMs show that open-source Mixture-of-Experts models, such as Qwen3-235...