[2602.18934] LoMime: Query-Efficient Membership Inference using Model Extraction in Label-Only Settings

arXiv - Machine Learning 4 min read Article

Summary

The paper presents LoMime, a novel framework for membership inference attacks that operates efficiently under label-only conditions, significantly reducing query costs while maintaining high accuracy.

Why It Matters

Membership inference attacks pose serious privacy risks in machine learning. LoMime addresses these risks by providing a cost-effective method that requires fewer queries, making it more practical for real-world applications. This advancement is crucial for enhancing data privacy and security in AI systems.

Key Takeaways

  • LoMime reduces the query requirements for membership inference attacks to about 1% of training samples.
  • The framework operates under strict black-box constraints, making it applicable in various real-world scenarios.
  • LoMime matches the performance of existing state-of-the-art methods while significantly lowering resource costs.
  • Active sampling and perturbation-based selection are key techniques used in the model extraction process.
  • The study evaluates the effectiveness of current defenses against label-only MIAs, providing insights into their vulnerabilities.
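The active sampling and perturbation-based selection mentioned above can be illustrated with a minimal, stdlib-only sketch. Everything here is hypothetical: a toy linear rule stands in for the black-box target model M, a nearest-centroid classifier stands in for the surrogate S, and the seed points, pool size, and round counts are illustrative constants, not the paper's implementation. The key idea it demonstrates is that label-only queries to M are concentrated on points where the current surrogate is least certain.

```python
import random

random.seed(0)

# Hypothetical stand-in for the black-box target model M: it returns only a
# hard label (label-only setting). The weights are illustrative.
def target_label(x):
    return 1 if 0.8 * x[0] - 0.5 * x[1] > 0.1 else 0

# Surrogate S: a nearest-centroid classifier rebuilt from queried (point, label)
# pairs. A stand-in for whatever model family the attacker actually trains.
def fit_surrogate(points, labels):
    cents = {}
    for c in (0, 1):
        members = [p for p, l in zip(points, labels) if l == c]
        cents[c] = tuple(sum(v) / len(members) for v in zip(*members))
    return cents

def surrogate_label(cents, x):
    dist = {c: sum((a - b) ** 2 for a, b in zip(x, cents[c])) for c in cents}
    return min(dist, key=dist.get)

# Synthetic candidate pool plus a small fixed seed grid covering both classes.
pool = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(400)]
seed = [(sx, sy) for sx in (-0.8, 0.0, 0.8) for sy in (-0.8, 0.0, 0.8)]
queried = list(seed)
labels = [target_label(p) for p in queried]
cents = fit_surrogate(queried, labels)

# Active sampling: each round, query M only on the pool points where the
# surrogate is least certain (its two centroid distances are nearly equal).
for _ in range(5):
    rest = [p for p in pool if p not in queried]
    rest.sort(key=lambda x: abs(
        sum((a - b) ** 2 for a, b in zip(x, cents[0])) -
        sum((a - b) ** 2 for a, b in zip(x, cents[1]))))
    batch = rest[:10]
    queried += batch
    labels += [target_label(p) for p in batch]
    cents = fit_surrogate(queried, labels)

# Fidelity: how often S reproduces M's labels on held-out synthetic points.
test = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
agree = sum(surrogate_label(cents, x) == target_label(x) for x in test)
print(f"queries to M: {len(queried)}, fidelity: {agree / len(test):.2f}")
```

The one-time query budget here is just the seed set plus the active batches; once fidelity is high, all subsequent membership-inference queries can go to S for free.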

Computer Science > Machine Learning · arXiv:2602.18934 (cs) · [Submitted on 21 Feb 2026]

Title: LoMime: Query-Efficient Membership Inference using Model Extraction in Label-Only Settings

Authors: Abdullah Caglar Oksuz, Anisa Halimi, Erman Ayday

Abstract: Membership inference attacks (MIAs) threaten the privacy of machine learning models by revealing whether a specific data point was used during training. Existing MIAs often rely on impractical assumptions, such as access to public datasets, shadow models, confidence scores, or knowledge of the training data distribution, which also leaves them vulnerable to defenses like confidence masking and adversarial regularization. Label-only MIAs, even under strict constraints, suffer from high query requirements per sample. We propose a cost-effective label-only MIA framework based on transferability and model extraction. By querying the target model M using active sampling, perturbation-based selection, and synthetic data, we extract a functionally similar surrogate S on which membership inference is performed. This shifts the query overhead to a one-time extraction phase, eliminating repeated queries to M. Operating under strict black-box constraints, our method matches the performance of state-of-the-art label-only MIAs while significantly reducing query costs. ...
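Once a surrogate S exists, a standard label-only membership signal is robustness of the predicted label under random perturbations: points far from the decision boundary (as training members tend to be) keep their label, while boundary-adjacent points flip often. The sketch below illustrates that signal with a toy linear rule standing in for S; the example points, perturbation scale, and trial count are illustrative assumptions, not values from the paper. Crucially, all queries hit S, not the target M.

```python
import random

random.seed(1)

# Hypothetical surrogate S (label-only access): a toy linear rule standing in
# for the extracted model. Queries to this function cost nothing against M.
def surrogate_label(x):
    return 1 if 0.8 * x[0] - 0.5 * x[1] > 0.1 else 0

# Label-only membership score: fraction of random Gaussian perturbations under
# which the predicted label stays unchanged. Higher = farther from the
# boundary = more member-like (the usual label-only MIA signal).
def membership_score(x, sigma=0.3, trials=200):
    base = surrogate_label(x)
    stable = sum(
        surrogate_label((x[0] + random.gauss(0, sigma),
                         x[1] + random.gauss(0, sigma))) == base
        for _ in range(trials))
    return stable / trials

deep = (0.9, -0.9)     # far from the boundary: label rarely flips
shallow = (0.13, 0.0)  # almost on the boundary: label flips about half the time
s_deep = membership_score(deep)
s_shallow = membership_score(shallow)
print(f"deep: {s_deep:.2f}, shallow: {s_shallow:.2f}")
```

Thresholding this score yields a member/non-member decision; because S is queried instead of M, the per-sample perturbation cost no longer counts against the attacker's query budget.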
