[2512.13352] On the Effectiveness of Membership Inference in Targeted Data Extraction from Large Language Models
Computer Science > Machine Learning
arXiv:2512.13352 (cs)
[Submitted on 15 Dec 2025 (v1), last revised 26 Feb 2026 (this version, v3)]

Title: On the Effectiveness of Membership Inference in Targeted Data Extraction from Large Language Models
Authors: Ali Al Sahili, Ali Chehab, Razane Tajeddine

Abstract: Large Language Models (LLMs) are prone to memorizing training data, which poses serious privacy risks. Two of the most prominent concerns are training data extraction and Membership Inference Attacks (MIAs). Prior research has shown that these threats are interconnected: an adversary can extract training data from an LLM by querying the model to generate a large volume of text and then applying MIAs to verify whether a particular data point was included in the training set. In this study, we integrate multiple MIA techniques into the data extraction pipeline to systematically benchmark their effectiveness. We then compare their performance in this integrated setting against results from conventional MIA benchmarks, allowing us to evaluate their practical utility in real-world extraction scenarios.

Subjects: Machine Learning (cs.LG); Computation and Language (cs.CL); Cryptography and Security (cs.CR)
Cite as: arXiv:2512.13352 [cs.LG] (or arXiv:2512.13352v3 [cs.LG] ...
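The extraction-then-verification pipeline the abstract describes — generate many candidate continuations, then score each with a membership-inference signal — can be sketched as below. This is a minimal illustration, not the paper's method: the loss-based score, the threshold value, and all function names are assumptions, and the token log-probabilities stand in for what a real LLM API would return.

```python
import math

def sequence_loss(token_log_probs):
    """Average negative log-likelihood of a generated sequence.

    Lower loss means the model assigns the text higher probability,
    which a simple loss-based MIA treats as evidence of membership
    (i.e., the text may have been memorized from training data).
    """
    return -sum(token_log_probs) / len(token_log_probs)

def rank_by_membership_signal(candidates, threshold=2.0):
    """Score and rank extraction candidates with a loss-based MIA.

    candidates: list of (text, token_log_probs) pairs, where the
        log-probs would come from querying the target model.
    threshold: illustrative cutoff below which a candidate is
        flagged as a likely training-set member.

    Returns (text, loss, is_member_guess) tuples, most suspicious
    (lowest loss) first.
    """
    scored = [
        (text, sequence_loss(lps), sequence_loss(lps) < threshold)
        for text, lps in candidates
    ]
    scored.sort(key=lambda item: item[1])
    return scored

if __name__ == "__main__":
    # Toy candidates: log-probs are made up for illustration.
    candidates = [
        ("generic filler text", [-3.0, -2.5, -3.5]),
        ("verbatim-looking string", [-0.2, -0.4, -0.3]),
    ]
    for text, loss, member in rank_by_membership_signal(candidates):
        print(f"{text!r}: loss={loss:.2f}, member_guess={member}")
```

In the integrated setting the paper studies, this scoring step is where different MIA techniques are swapped in; a loss threshold is only the simplest such signal, and stronger attacks calibrate the score against reference models or perturbed inputs.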