[2502.15567] Model Privacy: A Unified Framework for Understanding Model Stealing Attacks and Defenses
Computer Science > Machine Learning
arXiv:2502.15567 (cs)
[Submitted on 21 Feb 2025 (v1), last revised 5 Apr 2026 (this version, v3)]

Title: Model Privacy: A Unified Framework for Understanding Model Stealing Attacks and Defenses
Authors: Ganghua Wang, Yuhong Yang, Jie Ding

Abstract: The use of machine learning (ML) has become increasingly prevalent in various domains, highlighting the importance of understanding and ensuring its safety. One pressing concern is the vulnerability of ML applications to model stealing attacks. These attacks involve adversaries attempting to recover a learned model through limited query-response interactions, such as those found in cloud-based services or on-chip artificial intelligence interfaces. While existing literature proposes various attack and defense strategies, these often lack a theoretical foundation and standardized evaluation criteria. In response, this work presents a framework called "Model Privacy", providing a foundation for comprehensively analyzing model stealing attacks and defenses. We establish a rigorous formulation for the threat model and objectives, propose methods to quantify the goodness of attack and defense strategies, and analyze the fundamental tradeoffs between utility and privacy in ML models. Our developed theory offers valuab...
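To make the threat model described in the abstract concrete, here is a minimal sketch (not taken from the paper) of a model stealing attack through query-response interactions: the attacker never sees the victim model's parameters, only its predictions, and fits a surrogate on the collected query-response pairs. The model classes, query budget, and synthetic query distribution below are illustrative assumptions.

```python
# Minimal, illustrative model-stealing sketch (assumed setup, not the paper's method).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Victim: a deployed model the attacker can only query, e.g. via a cloud API.
X_train, y_train = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def query_api(x):
    """Black-box interface: returns predicted labels only, hiding the victim's parameters."""
    return victim.predict(x)

# Attacker: spend a limited query budget on synthetic inputs,
# then train a surrogate on the resulting (query, response) pairs.
n_queries = 500  # assumed query budget
X_query = rng.normal(size=(n_queries, X_train.shape[1]))
y_response = query_api(X_query)
surrogate = LogisticRegression(max_iter=1000).fit(X_query, y_response)

# Measure how closely the stolen surrogate mimics the victim on fresh inputs.
X_test = rng.normal(size=(2000, X_train.shape[1]))
agreement = np.mean(surrogate.predict(X_test) == query_api(X_test))
print(f"surrogate/victim agreement: {agreement:.2%}")
```

A defense in this setting would perturb or restrict the responses returned by `query_api` to lower the surrogate's agreement while keeping the answers useful to honest users, which is the utility-privacy tradeoff the paper's framework is designed to quantify.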