[2602.16596] Sequential Membership Inference Attacks

arXiv - Machine Learning 4 min read Article

Summary

The paper develops SeMI*, an "optimal" Membership Inference Attack (MIA) that exploits the sequence of model updates, rather than a single static model, to strengthen attacks and tighten privacy audits.

Why It Matters

As AI models evolve through continuous updates, understanding the privacy implications of those changes is crucial. This research addresses a gap in the existing literature on MI attacks against dynamic models, providing insights that can improve data-privacy measures in machine learning applications.

Key Takeaways

  • SeMI* optimally utilizes model updates to enhance MI attacks.
  • Accessing model sequences can strengthen MI signals compared to static models.
  • The study demonstrates practical applications of SeMI* across various data distributions.
  • Tighter privacy audits can be achieved by tuning insertion times and canaries.
  • The finite-sample analysis recovers existing asymptotic results as a special case.
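The "optimal" attack in a setting like this is a likelihood-ratio test on the released models. The sketch below is a hedged illustration only: a per-step Gaussian likelihood-ratio test on a released empirical mean. It is not the paper's SeMI* statistic, and every number and parameter in it (`n_t`, `sigma`, the canary value) is invented for the example.

```python
import math

# Hedged sketch: a Gaussian likelihood-ratio membership test at one update step.
# NOT the paper's SeMI* statistic; all parameters are illustrative assumptions.
# Released model at step t: empirical mean m_t of n_t points drawn with mean mu
# and standard deviation sigma, so m_t ~ N(mu, sigma^2 / n_t) under "canary out".

def log_likelihood_ratio(m_t, n_t, mu, sigma, canary):
    """log p(m_t | canary in) - log p(m_t | canary out), Gaussian approximation."""
    mu_in = (n_t * mu + canary) / (n_t + 1)   # inserting the canary shifts the mean
    var_in = sigma**2 / (n_t + 1)             # crude: ignores the canary's own spread
    var_out = sigma**2 / n_t

    def log_normal(x, m, v):
        return -0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)

    return log_normal(m_t, mu_in, var_in) - log_normal(m_t, mu, var_out)

# A released mean near the shifted value favors membership (positive LLR);
# a mean far on the other side favors non-membership (negative LLR).
print(log_likelihood_ratio(m_t=0.05, n_t=100, mu=0.0, sigma=1.0, canary=5.0))
print(log_likelihood_ratio(m_t=-0.2, n_t=100, mu=0.0, sigma=1.0, canary=5.0))
```

Thresholding this statistic gives a Neyman-Pearson-style test; the paper's contribution is analyzing the optimal version of such a test across the whole update sequence with finitely many samples.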

Computer Science > Machine Learning
arXiv:2602.16596 (cs) · Submitted on 18 Feb 2026

Title: Sequential Membership Inference Attacks
Authors: Thomas Michel, Debabrota Basu, Emilie Kaufmann

Abstract: Modern AI models are not static: they go through multiple updates over their lifecycles. Exploiting these model dynamics to create stronger Membership Inference (MI) attacks and tighter privacy audits is therefore a timely question. Although the literature empirically shows that using a sequence of model updates can increase the power of MI attacks, rigorous analysis of the "optimal" MI attack has been limited to static models with infinite samples. Hence, we develop an "optimal" MI attack, SeMI*, that uses the sequence of model updates to identify the presence of a target inserted at a certain update step. For empirical mean computation, we derive the optimal power of SeMI* with access to a finite number of samples, with or without privacy. Our results recover the existing asymptotic analysis. We observe that access to the model sequence avoids the dilution of MI signals, unlike existing attacks on the final model, where the MI signal vanishes as training data accumulates. Furthermore, an adversary can use SeMI* to tune both the insertion time and the canary to yield tighter privacy audits. Finally, we conduct experiments across data distributi...
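The "dilution" point in the abstract can be made concrete with a toy calculation. The sketch below assumes the released model is a plain empirical mean updated in batches; the batch size, number of updates, insertion step, and canary value are all made up for illustration and do not come from the paper.

```python
# Toy illustration of MI-signal dilution (all numbers are illustrative
# assumptions, not taken from the paper). The "model" is the empirical
# mean of the data seen so far, released once per batch update.
batch_size, n_updates, canary = 100, 50, 5.0
t_insert = 5  # canary inserted at update step 5 (0-indexed), a made-up choice

def n_at(t):
    """Number of points the model has seen after update step t."""
    return (t + 1) * batch_size

# Inserting a point x into n points with mean 0 shifts the mean by x / (n + 1).
shift_at_insertion = canary / (n_at(t_insert) + 1)    # sequence adversary tests here
shift_at_final = canary / (n_at(n_updates - 1) + 1)   # final-model adversary tests here

print(shift_at_insertion)  # larger: the dataset is still small at insertion time
print(shift_at_final)      # diluted: the canary's trace shrinks as data accumulates
```

An adversary who only sees the final model must detect the smaller shift, which vanishes as training data accumulates; an adversary who sees the whole sequence can test at the insertion step, where the canary's footprint is largest. This is also why tuning the insertion time helps in privacy audits.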
