[2506.14003] Unlearning Isn't Invisible: Detecting Unlearning Traces in LLMs from Model Outputs
Computer Science > Machine Learning
arXiv:2506.14003 (cs)
[Submitted on 16 Jun 2025 (v1), last revised 2 Mar 2026 (this version, v4)]

Title: Unlearning Isn't Invisible: Detecting Unlearning Traces in LLMs from Model Outputs
Authors: Yiwei Chen, Soumyadeep Pal, Yimeng Zhang, Qing Qu, Sijia Liu

Abstract: Machine unlearning (MU) for large language models (LLMs), commonly referred to as LLM unlearning, seeks to remove specific undesirable data or knowledge from a trained model while maintaining its performance on standard tasks. While unlearning plays a vital role in protecting data privacy, enforcing copyright, and mitigating sociotechnical harms in LLMs, we identify a new vulnerability post-unlearning: unlearning trace detection. We discover that unlearning leaves behind persistent "fingerprints" in LLMs, detectable traces in both model behavior and internal representations. These traces can be identified from output responses, even when prompted with forget-irrelevant inputs. Specifically, even a simple supervised classifier can determine whether a model has undergone unlearning, using only its prediction logits or even its textual outputs. Further analysis shows that these traces are embedded in intermediate activations and propagate nonlinearly to the final layer, forming low-dimensional, learnable manifolds in activation space...
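
The abstract states that even a simple supervised classifier over a model's prediction logits can reveal whether it has undergone unlearning. The sketch below illustrates one plausible form of such a probe; it is not the authors' released pipeline. The feature files, their shapes, and the choice of logistic regression are assumptions introduced purely for illustration.

# Minimal sketch of an unlearning-trace probe (illustrative; not the paper's exact setup).
# Assumes logit features have already been collected from original and unlearned models
# on forget-irrelevant prompts; the .npy filenames and featurization are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: one row per (model, prompt) pair, e.g. top-k sorted next-token logits.
# y: 1 if the row came from an unlearned model, 0 if from the original model.
X = np.load("logit_features.npy")   # shape (n_samples, k) -- hypothetical file
y = np.load("labels.npy")           # shape (n_samples,)   -- hypothetical file

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# The "simple supervised classifier" mentioned in the abstract; logistic regression
# is one natural choice, but any standard classifier would fit the description.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

acc = accuracy_score(y_test, clf.predict(X_test))
print(f"Unlearning-trace detection accuracy: {acc:.3f}")

High held-out accuracy of such a probe would indicate that unlearning leaves output-level traces of the kind the paper describes; the same setup could in principle be repeated with features derived from textual outputs or intermediate activations.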