[2603.19293] LLM-MRD: LLM-Guided Multi-View Reasoning Distillation for Fake News Detection
Computer Science > Computation and Language
arXiv:2603.19293 (cs)
[Submitted on 10 Mar 2026]

Title: LLM-MRD: LLM-Guided Multi-View Reasoning Distillation for Fake News Detection
Authors: Weilin Zhou, Shanwen Tan, Enhao Gu, Yurong Qian

Abstract: Multimodal fake news detection is crucial for mitigating societal disinformation. Existing approaches address the task by fusing multimodal features or by leveraging Large Language Models (LLMs) for advanced reasoning. However, these methods suffer from serious limitations: they lack comprehensive multi-view judgment and fusion, and the high computational cost of LLMs makes their reasoning prohibitively inefficient. To address these issues, we propose LLM-Guided Multi-View Reasoning Distillation for Fake News Detection (LLM-MRD), a novel teacher-student framework. The Student Multi-view Reasoning module first constructs a comprehensive foundation from textual, visual, and cross-modal perspectives. The Teacher Multi-view Reasoning module then generates deep reasoning chains as rich supervision signals. Our core Calibration Distillation mechanism distills this complex reasoning-derived knowledge into the lightweight student model. Experiments show LLM-MRD significantly outperforms state-of-the-art baselines.
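The abstract does not specify the form of the Calibration Distillation objective. As a rough illustration of the general teacher-student idea it builds on, the sketch below shows a standard soft-label distillation loss (temperature-scaled KL divergence mixed with hard-label cross-entropy); the function name, signature, and hyperparameters are assumptions for illustration, not the paper's actual mechanism.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, label,
                      temperature=2.0, alpha=0.5):
    """Illustrative distillation objective (not the paper's):
    mixes (a) KL divergence between the teacher's and student's
    temperature-softened class distributions with (b) hard-label
    cross-entropy on the student. `alpha` weights the soft term;
    the T^2 factor keeps gradient scale comparable across temperatures.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = sum(pt * math.log(pt / ps)
             for pt, ps in zip(p_teacher, p_student))
    ce = -math.log(softmax(student_logits)[label])
    return alpha * (temperature ** 2) * kl + (1 - alpha) * ce
```

When the student's logits match the teacher's, the KL term vanishes and only the hard-label cross-entropy remains, so the soft supervision only pulls the student where it disagrees with the teacher.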