[2602.19215] Understanding Empirical Unlearning with Combinatorial Interpretability
Summary
This article examines empirical unlearning in machine learning, that is, how knowledge can persist in models even after attempts to erase it, through the lens of combinatorial interpretability.
Why It Matters
Understanding empirical unlearning is crucial as it addresses the challenges of knowledge retention in machine learning models, particularly in contexts where data privacy and model interpretability are paramount. This research sheds light on the effectiveness of unlearning methods and their implications for AI safety and ethical AI development.
Key Takeaways
- Empirical unlearning methods may not fully erase knowledge from models.
- Combinatorial interpretability allows direct inspection of the knowledge encoded in a model's weights.
- Knowledge can persist and resurface despite attempts to remove it.
- The study evaluates unlearning methods on two dimensions: their effectiveness and how easily the erased knowledge can be recovered.
- Insights from this research are vital for improving model transparency and ethical AI practices.
Paper Details
Computer Science > Machine Learning, arXiv:2602.19215 (cs). Submitted on 22 Feb 2026.
Title: Understanding Empirical Unlearning with Combinatorial Interpretability
Authors: Shingo Kodama, Niv Cohen, Micah Adler, Nir Shavit
Abstract: While many recent methods aim to unlearn or remove knowledge from pretrained models, seemingly erased knowledge often persists and can be recovered in various ways. Because large foundation models are far from interpretable, understanding whether and how such knowledge persists remains a significant challenge. To address this, we turn to the recently developed framework of combinatorial interpretability. This framework, designed for two-layer neural networks, enables direct inspection of the knowledge encoded in the model weights. We reproduce baseline unlearning methods within the combinatorial interpretability setting and examine their behavior along two dimensions: (i) whether they truly remove knowledge of a target concept (the concept we wish to remove) or merely inhibit its expression while retaining the underlying information, and (ii) how easily the supposedly erased knowledge can be recovered through various fine-tuning operations. Our results shed light, within a fully interpretable setting, on how knowledge can persist despite unlearning and when it might resurface.
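The abstract's distinction between inhibiting a concept's expression and truly removing its underlying knowledge can be illustrated with a toy sketch. The following is our own minimal NumPy example, not the paper's actual setup or methods: in a hand-crafted two-layer ReLU network, zeroing only the output readout for a concept (inhibition) leaves the hidden feature intact, so a brief refit of the readout (a stand-in for fine-tuning) recovers the concept, while also zeroing the first-layer weights that compute the feature (removal) makes it unrecoverable from the hidden layer.

```python
import numpy as np

# Toy sketch (our own illustration, not the paper's setup): a two-layer ReLU
# network where hidden unit 0 encodes "concept A" and unit 1 encodes "concept B".
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))    # inputs; feature 0 drives concept A
target_A = X[:, 0]                          # what a concept-A readout should predict

W1 = np.eye(2)   # first layer: hidden unit i copies input feature i
W2 = np.eye(2)   # second layer: output i reads hidden unit i

def hidden(W1, X):
    return np.maximum(X @ W1.T, 0.0)        # ReLU activations

# (i) "Inhibition": zero only the readout row for concept A.
#     The hidden feature for A is untouched, so the knowledge persists.
W2_inhibit = W2.copy()
W2_inhibit[0, :] = 0.0

# (ii) "Removal": also zero the first-layer weights that compute the feature.
W1_remove = W1.copy()
W1_remove[0, :] = 0.0

def recovery_error(W1_unlearned):
    """Refit a linear readout for concept A from the hidden activations
    (a crude stand-in for fine-tuning only the output layer)."""
    H = hidden(W1_unlearned, X)
    w, *_ = np.linalg.lstsq(H, target_A, rcond=None)
    return float(np.mean((H @ w - target_A) ** 2))

err_inhibited = recovery_error(W1)          # feature intact: near-zero error
err_removed = recovery_error(W1_remove)     # feature gone: large residual error
print(f"inhibited: {err_inhibited:.6f}  removed: {err_removed:.6f}")
```

In this sketch the inhibited model recovers the concept almost perfectly from its hidden activations, whereas the removed model cannot, mirroring the paper's dimension (ii): how easily supposedly erased knowledge resurfaces under fine-tuning depends on whether the underlying representation was actually destroyed.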