[2603.19375] Automated Membership Inference Attacks: Discovering MIA Signal Computations using LLM Agents
Computer Science > Cryptography and Security
arXiv:2603.19375 (cs)
[Submitted on 19 Mar 2026]

Title: Automated Membership Inference Attacks: Discovering MIA Signal Computations using LLM Agents
Authors: Toan Tran, Olivera Kotevska, Li Xiong

Abstract: Membership inference attacks (MIAs), which enable an adversary to determine whether a specific data point was part of a model's training dataset, have emerged as an important framework for understanding, assessing, and quantifying the potential information leakage of machine learning systems. Designing effective MIAs is challenging and usually requires extensive manual exploration of model behaviors to identify potential vulnerabilities. In this paper, we introduce AutoMIA -- a novel framework that leverages large language model (LLM) agents to automate the design and implementation of new MIA signal computations. By utilizing LLM agents, we can systematically explore a vast space of potential attack strategies, enabling the discovery of novel ones. Our experiments demonstrate that AutoMIA can successfully discover new MIAs tailored to a user-configured target model and dataset, resulting in improvements of up to 0.18 in absolute AUC over existing MIAs. This work provides the first demonstration that LLM agents can serve as an effective...
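To make the notion of an "MIA signal computation" concrete, the sketch below shows the classic loss-based membership signal that frameworks like AutoMIA aim to improve upon: members of the training set tend to receive lower cross-entropy loss, so the (negated) loss acts as a membership score ranked by AUC. This is a minimal illustrative example using synthetic data, not the paper's method; all function names and the toy data generator are our own assumptions.

```python
import numpy as np
from itertools import product

def mia_loss_signal(probs, labels):
    """Classic loss-based MIA signal: the log-probability of the true
    class (i.e. negated cross-entropy loss). Higher => more member-like."""
    p_true = probs[np.arange(len(labels)), labels]
    return np.log(np.clip(p_true, 1e-12, None))

def auc(scores_member, scores_nonmember):
    """Rank-based AUC: probability that a random member outscores
    a random non-member (ties count half)."""
    pairs = list(product(scores_member, scores_nonmember))
    wins = sum(m > n for m, n in pairs)
    ties = sum(m == n for m, n in pairs)
    return (wins + 0.5 * ties) / len(pairs)

# Toy demo: "member" points get confident (overfit-like) predictions,
# "non-member" points get less confident ones.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=200)

def softmax_probs(conf):
    logits = rng.normal(0.0, 1.0, size=(200, 3))
    logits[np.arange(200), labels] += conf  # boost the true class
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

member_scores = mia_loss_signal(softmax_probs(conf=3.0), labels)
nonmember_scores = mia_loss_signal(softmax_probs(conf=0.5), labels)
print(f"AUC = {auc(member_scores, nonmember_scores):.2f}")
```

A hand-designed signal like this is exactly the kind of artifact AutoMIA's LLM agents are tasked with generating and refining automatically, with the target model and dataset supplied by the user.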