[2603.00587] Unlearning Evaluation through Subset Statistical Independence
Computer Science > Machine Learning

arXiv:2603.00587 (cs)
[Submitted on 28 Feb 2026]

Title: Unlearning Evaluation through Subset Statistical Independence
Authors: Chenhao Zhang, Muxing Li, Feng Liu, Weitong Chen, Miao Xu

Abstract: Evaluating machine unlearning remains challenging, as existing methods typically require retraining reference models or performing membership inference attacks, both of which rely on prior access to the training configuration or supervision labels, making them impractical in realistic scenarios. Motivated by the fact that most unlearning algorithms remove a small, random subset of the training data, we propose a subset-level evaluation framework based on statistical independence. Specifically, we design a tailored use of the Hilbert-Schmidt Independence Criterion (HSIC) to assess whether the model outputs on a given subset exhibit statistical dependence, without requiring model retraining or auxiliary classifiers. Our method provides a simple, standalone evaluation procedure that aligns with unlearning workflows. Extensive experiments demonstrate that our approach reliably distinguishes in-training from out-of-training subsets and clearly differentiates unlearning effectiveness, even when existing evaluations fall short.

Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2603.00587 [cs.LG]
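The abstract's central tool, the Hilbert-Schmidt Independence Criterion, can be illustrated with a standard biased empirical estimator. This is a generic sketch using RBF kernels with the median heuristic, not the paper's tailored construction; all function names and parameters here are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, sigma=None):
    """RBF (Gaussian) kernel matrix; bandwidth via the median heuristic if unset."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # pairwise squared distances
    if sigma is None:
        sigma = np.sqrt(0.5 * np.median(d2[d2 > 0]))
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(X, Y):
    """Biased HSIC estimator: (1/n^2) * tr(K H L H), H = I - (1/n) 11^T.

    X, Y: (n, d) arrays of paired samples (e.g. model outputs vs. subset
    features). Larger values indicate stronger statistical dependence;
    the estimate is nonnegative for PSD kernels.
    """
    n = X.shape[0]
    K = rbf_kernel(X)
    L = rbf_kernel(Y)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / n ** 2

# Sanity check: dependent pairs score higher than independent ones.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
Y_indep = rng.normal(size=(200, 3))
print(hsic(X, X) > hsic(X, Y_indep))  # dependence detected
```

In the spirit of the paper's setup, X might hold model outputs on a candidate forget subset and Y a representation of that subset's inputs; an unlearned model should drive the dependence score toward that of out-of-training data.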