[2603.20746] Adversarial Attacks on Locally Private Graph Neural Networks
Computer Science > Machine Learning

arXiv:2603.20746 (cs) [Submitted on 21 Mar 2026]

Title: Adversarial Attacks on Locally Private Graph Neural Networks

Authors: Matta Varun (Indian Institute of Technology Kharagpur, India), Ajay Kumar Dhakar (Indian Institute of Technology Kharagpur, India), Yuan Hong (University of Connecticut, USA), Shamik Sural (Indian Institute of Technology Kharagpur, India)

Abstract: Graph neural networks (GNNs) are a powerful tool for analyzing graph-structured data. However, their vulnerability to adversarial attacks raises serious concerns, especially when they handle sensitive information. Local Differential Privacy (LDP) offers a privacy-preserving framework for training GNNs, but its impact on adversarial robustness remains underexplored. This paper investigates adversarial attacks on LDP-protected GNNs. We explore how the privacy guarantees of LDP can be leveraged or hindered by adversarial perturbations. The effectiveness of existing attack methods on LDP-protected GNNs is analyzed, and potential challenges in crafting adversarial examples under LDP constraints are discussed. Additionally, we suggest directions for defending LDP-protected GNNs against adversarial attacks. This work investigates the interplay between privacy and security in graph learning, highlighting th...
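For context on the LDP setting the abstract refers to, the sketch below shows one common LDP primitive, randomized response, applied to binary node features before they leave a user's device. This is an illustrative assumption about how local privatization of GNN inputs can look, not the mechanism used in the paper; the function names (`randomized_response`, `perturb_features`) are hypothetical.

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise flip it. This satisfies eps-local differential privacy."""
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_keep else 1 - bit

def perturb_features(features: list[int], epsilon: float) -> list[int]:
    """Privatize a node's binary feature vector by perturbing
    each bit independently with randomized response."""
    return [randomized_response(b, epsilon) for b in features]

# Example: a node locally privatizes its features before sending
# them to the (untrusted) server that trains the GNN.
noisy = perturb_features([1, 0, 1, 1, 0], epsilon=1.0)
```

Smaller `epsilon` flips bits more often (stronger privacy, noisier features); the server can later de-bias aggregate statistics because the flip probability is known. This tension between injected noise and model utility is also what shapes the attack surface the paper studies.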