[2603.22854] Avoiding Over-smoothing in Social Media Rumor Detection with Pre-trained Propagation Tree Transformer
Computer Science > Computation and Language

arXiv:2603.22854 (cs) [Submitted on 24 Mar 2026]

Title: Avoiding Over-smoothing in Social Media Rumor Detection with Pre-trained Propagation Tree Transformer

Authors: Chaoqun Cui, Caiyan Jia

Abstract: Deep learning techniques for rumor detection typically use Graph Neural Networks (GNNs) to analyze post relations. These methods, however, falter due to over-smoothing when processing rumor propagation structures, leading to declining performance. Our investigation reveals that over-smoothing is intrinsically tied to the structural characteristics of rumor propagation trees, in which the majority of nodes are 1-level nodes. Furthermore, GNNs struggle to capture long-range dependencies within these trees. To circumvent these challenges, we propose a Pre-Trained Propagation Tree Transformer (P2T3) method based on a pure Transformer architecture. It extracts all conversation chains from a tree structure following the propagation direction of replies, uses token-wise embedding to infuse connection information and introduce the necessary inductive bias, and pre-trains on large-scale unlabeled datasets. Experiments indicate that P2T3 surpasses previous state-of-the-art methods on multiple benchmark datasets and performs w...
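The chain-extraction step the abstract describes (turning a propagation tree into all of its conversation chains, following the reply direction) can be sketched as a simple recursive traversal. This is a minimal illustration, not the paper's implementation; the function name, tree representation (parent → list of child post IDs), and toy data are all assumptions.

```python
# Hypothetical sketch of the chain-extraction step described in the
# abstract: every root-to-leaf reply path in a propagation tree becomes
# one conversation chain to feed to the Transformer.
def extract_chains(tree, root):
    """Return all root-to-leaf paths, following the reply direction."""
    children = tree.get(root, [])
    if not children:  # leaf post: the chain ends here
        return [[root]]
    chains = []
    for child in children:
        for chain in extract_chains(tree, child):
            chains.append([root] + chain)
    return chains

# Toy propagation tree: post 0 is the source claim; posts 1 and 2 reply
# to it, and post 3 replies to post 1.
tree = {0: [1, 2], 1: [3]}
print(extract_chains(tree, 0))  # → [[0, 1, 3], [0, 2]]
```

Each resulting chain is a linear sequence of posts, which is what lets a pure Transformer process the tree without the message-passing (and hence over-smoothing) of a GNN.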