[2602.16596] Sequential Membership Inference Attacks
Summary
The paper develops SeMI*, an optimal Membership Inference Attack (MIA) strategy that exploits a model's sequence of updates to strengthen attacks and tighten privacy audits.
Why It Matters
As AI models evolve through continuous updates, understanding the implications of these changes on privacy is crucial. This research addresses the gap in existing literature regarding dynamic models and MIAs, providing insights that can help improve data privacy measures in machine learning applications.
Key Takeaways
- SeMI* optimally utilizes model updates to enhance MI attacks.
- Accessing model sequences can strengthen MI signals compared to static models.
- The study demonstrates practical applications of SeMI* across various data distributions.
- Tighter privacy audits can be achieved by tuning insertion times and canaries.
- The finite-sample results recover the existing asymptotic analysis as a special case.
Computer Science > Machine Learning
arXiv:2602.16596 (cs)
[Submitted on 18 Feb 2026]
Title: Sequential Membership Inference Attacks
Authors: Thomas Michel, Debabrota Basu, Emilie Kaufmann
Abstract: Modern AI models are not static. They go through multiple updates in their lifecycles. Thus, exploiting the model dynamics to create stronger Membership Inference (MI) attacks and tighter privacy audits is a timely question. Though the literature empirically shows that using a sequence of model updates can increase the power of MI attacks, rigorous analysis of the 'optimal' MI attack is limited to static models with infinite samples. Hence, we develop an 'optimal' MI attack, SeMI*, that uses the sequence of model updates to identify the presence of a target inserted at a certain update step. For the empirical mean computation, we derive the optimal power of SeMI* with access to a finite number of samples, with or without privacy. Our results recover the existing asymptotic analysis. We observe that access to the model sequence avoids the dilution of MI signals, unlike existing attacks on the final model, where the MI signal vanishes as training data accumulates. Furthermore, an adversary can use SeMI* to tune both the insertion time and the canary to yield tighter privacy audits. Finally, we conduct experiments across data distributions...
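The dilution effect the abstract describes is easy to see in the empirical-mean setting it analyzes. The sketch below is a toy simulation of our own, not the paper's SeMI* statistic: the constants (T, B, CANARY, T_STAR), the two score functions, and the Gaussian data are all illustrative assumptions. It compares an adversary who sees every released mean against one who sees only the final model.

```python
import numpy as np

rng = np.random.default_rng(0)

T, B = 10, 50       # number of model updates, records per update batch
CANARY = 4.0        # out-of-distribution target record (assumed value)
T_STAR = 3          # update step at which the canary may be inserted
TRIALS = 2000

def released_means(member: bool) -> np.ndarray:
    """The 'model' here is a running empirical mean, released after each
    of T batch updates; the canary is optionally added to batch T_STAR."""
    total, count, means = 0.0, 0, []
    for t in range(T):
        batch = rng.standard_normal(B)
        if member and t == T_STAR:
            batch = np.append(batch, CANARY)
        total += batch.sum()
        count += len(batch)
        means.append(total / count)
    return np.array(means)

def sequential_score(means: np.ndarray) -> float:
    """Attack using the sequence: reconstruct the sum of batch T_STAR from
    the two adjacent releases, so the canary's contribution is observed
    before later data dilutes it (the adversary assumes the nominal batch
    size, so the reconstruction is slightly off when the canary is added)."""
    n_before = T_STAR * B
    prev = means[T_STAR - 1] * n_before if T_STAR > 0 else 0.0
    return means[T_STAR] * (n_before + B) - prev

def final_score(means: np.ndarray) -> float:
    """Attack using only the final model: the canary shifts the last mean
    by roughly CANARY / (T * B), which vanishes as training data grows."""
    return means[-1]

gap_seq = np.mean([sequential_score(released_means(True)) for _ in range(TRIALS)]) \
        - np.mean([sequential_score(released_means(False)) for _ in range(TRIALS)])
gap_fin = np.mean([final_score(released_means(True)) for _ in range(TRIALS)]) \
        - np.mean([final_score(released_means(False)) for _ in range(TRIALS)])

print(f"member/non-member score gap, sequential attack:  {gap_seq:.2f}")
print(f"member/non-member score gap, final-model attack: {gap_fin:.4f}")
```

With the sequence, the member/non-member score gap is on the order of the full canary value; with only the final model, it shrinks to roughly CANARY/(T*B), matching the observation that the MI signal vanishes as training data accumulates.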