[2602.20419] CREDIT: Certified Ownership Verification of Deep Neural Networks Against Model Extraction Attacks


Summary

The paper introduces CREDIT, a method for certified ownership verification of deep neural networks, designed to identify surrogate models produced by model extraction attacks with rigorous theoretical guarantees.

Why It Matters

As Machine Learning as a Service (MLaaS) grows, the risk of model extraction attacks increases. CREDIT addresses the critical need for ownership verification, providing a theoretical framework that enhances security for DNN models, which is vital for developers and organizations relying on ML services.

Key Takeaways

  • CREDIT offers a certified method for verifying ownership of DNNs.
  • It employs mutual information to assess model similarity effectively.
  • The approach provides rigorous theoretical guarantees for ownership verification.
  • Extensive evaluations demonstrate state-of-the-art performance across various datasets.
  • A public implementation is available, supporting further research.

Computer Science > Machine Learning
arXiv:2602.20419 (cs) · Submitted on 23 Feb 2026

Title: CREDIT: Certified Ownership Verification of Deep Neural Networks Against Model Extraction Attacks
Authors: Bolin Shen, Zhan Cheng, Neil Zhenqiang Gong, Fan Yao, Yushun Dong

Abstract: Machine Learning as a Service (MLaaS) has emerged as a widely adopted paradigm for providing access to deep neural network (DNN) models, enabling users to conveniently leverage these models through standardized APIs. However, such services are highly vulnerable to Model Extraction Attacks (MEAs), where an adversary repeatedly queries a target model to collect input-output pairs and uses them to train a surrogate model that closely replicates its functionality. While numerous defense strategies have been proposed, verifying the ownership of a suspicious model with strict theoretical guarantees remains a challenging task. To address this gap, we introduce CREDIT, a certified ownership verification method against MEAs. Specifically, we employ mutual information to quantify the similarity between DNN models, propose a practical verification threshold, and provide rigorous theoretical guarantees for ownership verification based on this threshold. We extensively evaluate our approach on several mainstream datasets across differen...
