[2603.26105] Are LLM-Enhanced Graph Neural Networks Robust against Poisoning Attacks?
Computer Science > Machine Learning
arXiv:2603.26105 (cs)
[Submitted on 27 Mar 2026]

Title: Are LLM-Enhanced Graph Neural Networks Robust against Poisoning Attacks?
Authors: Yuhang Ma, Jie Wang, Zheng Yan

Abstract: Large Language Models (LLMs) have advanced Graph Neural Networks (GNNs) by enriching node representations with semantic features, giving rise to LLM-enhanced GNNs that achieve notable performance gains. However, the robustness of these models against poisoning attacks, which manipulate both graph structures and textual attributes during training, remains unexplored. To bridge this gap, we propose a robustness assessment framework that systematically evaluates LLM-enhanced GNNs under poisoning attacks. Our framework enables comprehensive evaluation across multiple dimensions. Specifically, we assess 24 victim models by combining eight LLM- or Language Model (LM)-based feature enhancers with three representative GNN backbones. To ensure diversity in attack coverage, we incorporate six structural poisoning attacks (both targeted and non-targeted) and three textual poisoning attacks operating at the character, word, and sentence levels. Furthermore, we employ four real-world datasets, including one released after the emergence of LLMs, to avoid potential ground truth leakage during LLM pretraining, thereby [...]
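The abstract describes a combinatorial evaluation grid: 8 feature enhancers x 3 GNN backbones = 24 victim models, each evaluated under 6 structural + 3 textual poisoning attacks on 4 datasets. Below is a minimal Python sketch of how such a grid could be enumerated. All component names are placeholders: the abstract does not name the concrete enhancers, attacks, or datasets, and the GCN/GAT/GraphSAGE backbone names are my assumption standing in for the paper's "three representative GNN backbones"; evaluate() is a hypothetical stub, not the authors' code.

    import itertools

    # Placeholder component names (the paper's concrete choices are not given in the abstract).
    ENHANCERS = [f"enhancer_{i}" for i in range(1, 9)]                # 8 LLM/LM-based feature enhancers
    BACKBONES = ["GCN", "GAT", "GraphSAGE"]                           # 3 backbones (assumed names)
    STRUCTURAL_ATTACKS = [f"struct_attack_{i}" for i in range(1, 7)]  # 6 targeted/non-targeted attacks
    TEXTUAL_ATTACKS = ["char_level", "word_level", "sentence_level"]  # 3 textual attack granularities
    DATASETS = [f"dataset_{i}" for i in range(1, 5)]                  # 4 real-world datasets

    def evaluate(enhancer, backbone, attack, dataset):
        """Hypothetical stub: train the victim model on the poisoned data
        and report clean vs. poisoned test accuracy."""
        return {"clean_acc": None, "poisoned_acc": None}

    victims = list(itertools.product(ENHANCERS, BACKBONES))  # 8 x 3 = 24 victim models
    attacks = STRUCTURAL_ATTACKS + TEXTUAL_ATTACKS           # 6 + 3 = 9 poisoning attacks

    print(f"{len(victims)} victims x {len(attacks)} attacks x {len(DATASETS)} datasets")
    for (enhancer, backbone), attack, dataset in itertools.product(victims, attacks, DATASETS):
        result = evaluate(enhancer, backbone, attack, dataset)

Running the enumeration yields 24 x 9 x 4 = 864 victim/attack/dataset combinations, which matches the scale of systematic coverage the abstract claims for the framework.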