[2603.28942] ReproMIA: A Comprehensive Analysis of Model Reprogramming for Proactive Membership Inference Attacks
Computer Science > Machine Learning
arXiv:2603.28942 (cs)
[Submitted on 30 Mar 2026 (v1), last revised 4 Apr 2026 (this version, v2)]

Title: ReproMIA: A Comprehensive Analysis of Model Reprogramming for Proactive Membership Inference Attacks
Authors: Chihan Huang, Huaijin Wang, Shuai Wang

Abstract: The pervasive deployment of deep learning models across critical domains has concurrently intensified privacy concerns due to their inherent propensity for data memorization. While Membership Inference Attacks (MIAs) serve as the gold standard for auditing these privacy vulnerabilities, conventional MIA paradigms are increasingly constrained by the prohibitive computational costs of shadow model training and a precipitous performance degradation under low False Positive Rate constraints. To overcome these challenges, we introduce a novel perspective by leveraging the principles of model reprogramming as an active signal amplifier for privacy leakage. Building upon this insight, we present \texttt{ReproMIA}, a unified and efficient proactive framework for membership inference. We rigorously substantiate, both theoretically and empirically, how our methodology proactively induces and magnifies latent privacy footprints embedded within the model's representations. We provide specialized ins...
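The abstract's evaluation regime, attack performance at low False Positive Rates, can be illustrated with the conventional loss-thresholding MIA baseline that ReproMIA is contrasted against. The sketch below is not the paper's method: it uses synthetic, illustrative loss distributions (lower loss for training members, reflecting memorization) and measures True Positive Rate at fixed low FPRs, the setting where the abstract says conventional attacks degrade precipitously.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in losses: members (training points) tend to have lower loss
# than non-members due to memorization. These distributions are purely
# illustrative and are not taken from the paper.
member_losses = rng.normal(loc=0.5, scale=0.3, size=10_000)
nonmember_losses = rng.normal(loc=1.0, scale=0.3, size=10_000)


def tpr_at_fpr(member_losses, nonmember_losses, target_fpr):
    """Loss-thresholding attack: predict 'member' when loss < threshold.

    The threshold is set so that exactly target_fpr of non-members
    fall below it, then the fraction of members caught is reported.
    """
    threshold = np.quantile(nonmember_losses, target_fpr)
    return float(np.mean(member_losses < threshold))


for fpr in (0.1, 0.01, 0.001):
    print(f"TPR at {fpr:.1%} FPR: {tpr_at_fpr(member_losses, nonmember_losses, fpr):.3f}")
```

Even with well-separated toy distributions, the achievable TPR shrinks sharply as the FPR budget tightens, which is the degradation the abstract attributes to conventional MIAs in the low-FPR regime.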