[2602.17483] What Do LLMs Associate with Your Name? A Human-Centered Black-Box Audit of Personal Data
Summary
This paper presents a human-centered audit of the personal data that large language models (LLMs) associate with individual names, highlighting privacy concerns and how users perceive those associations.
Why It Matters
As LLMs increasingly interact with personal data, understanding their associations with user identities is crucial for privacy rights and ethical AI development. This study provides insights into user concerns and the need for transparency in AI systems.
Key Takeaways
- LLMs can confidently generate personal data associations for individuals.
- 72% of surveyed participants wanted control over the associations models generate for their name.
- The study introduces LMP2 (Language Model Privacy Probe), a new tool for auditing personal data associations in LLMs.
Computer Science > Human-Computer Interaction
arXiv:2602.17483 (cs) [Submitted on 19 Feb 2026]
Title: What Do LLMs Associate with Your Name? A Human-Centered Black-Box Audit of Personal Data
Authors: Dimitri Staufer, Kirsten Morehouse
Abstract: Large language models (LLMs), and conversational agents based on them, are exposed to personal data (PD) during pre-training and during user interactions. Prior work shows that PD can resurface, yet users lack insight into how strongly models associate specific information with their identity. We audit PD across eight LLMs (3 open-source; 5 API-based, including GPT-4o), introduce LMP2 (Language Model Privacy Probe), a human-centered, privacy-preserving audit tool refined through two formative studies (N=20), and run two studies with EU residents to capture (i) intuitions about LLM-generated PD (N1=155) and (ii) reactions to tool output (N2=303). We show empirically that models confidently generate multiple PD categories for well-known individuals. For everyday users, GPT-4o generates 11 features with 60% or more accuracy (e.g., gender, hair color, languages). Finally, 72% of participants sought control over model-generated associations with their name, raising questions about what counts as PD and whether data privacy rights should extend to LLMs.
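The paper does not detail LMP2's internals here, but the black-box idea it describes, repeatedly prompting a model for a personal-data feature of a name and measuring how consistently it asserts the same value, can be sketched roughly as follows. Everything in this sketch (the `query_model` interface, the stub model, the prompt wording) is a hypothetical illustration, not the authors' implementation:

```python
from collections import Counter

def audit_name_association(query_model, name, feature, n_samples=5):
    """Black-box probe sketch: sample the model several times and use
    the agreement rate of its answers as a rough confidence proxy."""
    prompt = f"What is the {feature} of {name}? Answer in one word."
    answers = [query_model(prompt) for _ in range(n_samples)]
    value, count = Counter(answers).most_common(1)[0]
    return value, count / n_samples  # modal answer, fraction of samples agreeing

# Stub standing in for a real LLM API call (hypothetical, deterministic).
def fake_model(prompt):
    return "brown"

value, agreement = audit_name_association(fake_model, "Jane Doe", "hair color")
```

A real audit would call an LLM API instead of the stub and compare the modal answers against user-verified ground truth, which is how an accuracy figure like the paper's "11 features with 60% or more accuracy" could be computed.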