[2602.20418] CITED: A Decision Boundary-Aware Signature for GNNs Towards Model Extraction Defense

arXiv - Machine Learning

Summary

The paper presents CITED, a novel framework for defending Graph Neural Networks (GNNs) against Model Extraction Attacks (MEAs) by providing ownership verification without compromising performance.

Why It Matters

As GNNs become increasingly prevalent in applications like recommendation systems, MEAs pose a significant threat to the intellectual property of model owners. CITED addresses this vulnerability by enabling robust ownership verification, which is crucial for maintaining trust in AI systems. This research contributes to ongoing efforts to enhance AI security, particularly in MLaaS environments.

Key Takeaways

  • CITED offers a unique approach to ownership verification for GNNs.
  • The framework effectively defends against Model Extraction Attacks without degrading model performance.
  • CITED outperforms existing watermarking and fingerprinting methods.
  • The research highlights the importance of security in deploying GNNs in real-world applications.
  • Code for the CITED framework is publicly available for further research and application.

Computer Science > Machine Learning
arXiv:2602.20418 (cs) [Submitted on 23 Feb 2026]

Title: CITED: A Decision Boundary-Aware Signature for GNNs Towards Model Extraction Defense

Authors: Bolin Shen, Md Shamim Seraj, Zhan Cheng, Shayok Chakraborty, Yushun Dong

Abstract: Graph neural networks (GNNs) have demonstrated superior performance in various applications, such as recommendation systems and financial risk management. However, deploying large-scale GNN models locally is particularly challenging for users, as it requires significant computational resources and extensive property data. Consequently, Machine Learning as a Service (MLaaS) has become increasingly popular, offering a convenient way to deploy and access various models, including GNNs. However, an emerging threat known as Model Extraction Attacks (MEAs) presents significant risks, as adversaries can readily obtain surrogate GNN models exhibiting similar functionality. Specifically, attackers repeatedly query the target model using subgraph inputs to collect corresponding responses. These input-output pairs are subsequently utilized to train their own surrogate models at minimal cost. Many techniques have been proposed to defend against MEAs, but most are limited to specific output levels (e.g., embedding or label) and suffer from inherent...
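The attack recipe described in the abstract (query the target model, collect input-output pairs, train a surrogate on them) can be sketched in miniature. This is an illustrative toy, not the paper's setting: `target_model` is a hypothetical stand-in for an MLaaS-hosted GNN answering subgraph queries, and a 1-nearest-neighbour lookup stands in for training a surrogate GNN.

```python
import random

# Hypothetical stand-in for the MLaaS-hosted target model: a fixed
# labeling rule over 2-D feature vectors. In the paper's setting this
# would be a trained GNN responding to subgraph queries.
def target_model(x):
    return 1 if x[0] + x[1] > 1.0 else 0

# Step 1: the attacker queries the target with inputs it controls
# and records the resulting (input, label) pairs.
random.seed(0)
queries = [(random.random(), random.random()) for _ in range(200)]
pairs = [(x, target_model(x)) for x in queries]

# Step 2: the collected pairs train a cheap surrogate. Here a
# 1-nearest-neighbour lookup plays the role of surrogate training.
def surrogate(x, pairs=pairs):
    nearest = min(pairs, key=lambda p: (p[0][0] - x[0]) ** 2
                                       + (p[0][1] - x[1]) ** 2)
    return nearest[1]

# Fidelity: how often the surrogate agrees with the target on
# fresh inputs the attacker never queried.
fresh = [(random.random(), random.random()) for _ in range(500)]
agree = sum(surrogate(x) == target_model(x) for x in fresh) / len(fresh)
print(f"surrogate fidelity: {agree:.2f}")
```

Even this crude surrogate agrees with the target on most fresh inputs, which is why defenses like CITED focus on verifying ownership of the stolen functionality rather than preventing queries outright.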
