[2506.14003] Unlearning Isn't Invisible: Detecting Unlearning Traces in LLMs from Model Outputs

arXiv - Machine Learning

Computer Science > Machine Learning
arXiv:2506.14003 (cs)
[Submitted on 16 Jun 2025 (v1), last revised 2 Mar 2026 (this version, v4)]

Title: Unlearning Isn't Invisible: Detecting Unlearning Traces in LLMs from Model Outputs
Authors: Yiwei Chen, Soumyadeep Pal, Yimeng Zhang, Qing Qu, Sijia Liu

Abstract: Machine unlearning (MU) for large language models (LLMs), commonly referred to as LLM unlearning, seeks to remove specific undesirable data or knowledge from a trained model while maintaining its performance on standard tasks. While unlearning plays a vital role in protecting data privacy, enforcing copyright, and mitigating sociotechnical harms in LLMs, we identify a new vulnerability post-unlearning: unlearning trace detection. We discover that unlearning leaves behind persistent "fingerprints" in LLMs: detectable traces in both model behavior and internal representations. These traces can be identified from output responses, even when prompted with forget-irrelevant inputs. Specifically, even a simple supervised classifier can determine whether a model has undergone unlearning, using only its prediction logits or even its textual outputs. Further analysis shows that these traces are embedded in intermediate activations and propagate nonlinearly to the final layer, forming low-dimensional, learnable manifolds in activat...
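The abstract's central claim can be illustrated with a small sketch: train a supervised classifier on logit vectors collected from a pool of models, labeled by whether each model was unlearned. Since querying real models is out of scope here, synthetic logit vectors with a faint mean shift stand in for the post-unlearning "fingerprint"; the shift magnitude, dimensions, and data are assumptions for the demo, not values from the paper.

```python
# Sketch: a simple supervised classifier distinguishing "unlearned" from
# "original" models using only prediction-logit vectors. Real logits would
# come from querying each model on forget-irrelevant prompts; here a small
# synthetic mean shift (an assumed stand-in) plays the role of the trace.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_models, dim = 200, 64  # models per class, logit-vector dimension (assumed)

# Synthetic stand-ins: baseline logits vs. logits carrying a faint shift.
original = rng.normal(0.0, 1.0, size=(n_models, dim))
unlearned = rng.normal(0.4, 1.0, size=(n_models, dim))

X = np.vstack([original, unlearned])
y = np.array([0] * n_models + [1] * n_models)  # label 1 = unlearned

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"trace-detection accuracy: {acc:.2f}")
```

Even this linear probe separates the two pools well above chance whenever unlearning leaves a consistent shift in the logit distribution, which is the vulnerability the paper highlights.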

Originally published on March 03, 2026. Curated by AI News.

