[2603.21054] Harmful Visual Content Manipulation Matters in Misinformation Detection Under Multimedia Scenarios
Computer Science > Machine Learning
arXiv:2603.21054 (cs)
[Submitted on 22 Mar 2026]

Title: Harmful Visual Content Manipulation Matters in Misinformation Detection Under Multimedia Scenarios
Authors: Bing Wang, Ximing Li, Changchun Li, Jinjin Chi, Tianze Li, Renchu Guan, Shengsheng Wang

Abstract: The widespread dissemination of misinformation across social media platforms has severe negative effects on society. To address this challenge, the automatic detection of misinformation, particularly in multimedia scenarios, has gained significant attention from both academia and industry, giving rise to a research task known as Multimodal Misinformation Detection (MMD). Current MMD approaches typically focus on capturing the semantic relationships and inconsistencies between modalities, but often overlook other critical indicators within multimodal content. Recent research has shown that manipulated features in the visual content of social media articles serve as valuable clues for MMD. Meanwhile, we argue that the potential intentions behind such manipulation, e.g., harmful versus harmless, also matter for MMD. Therefore, in this study, we aim to identify such multimodal misinformation by capturing two types of features: manipulation ...