[2509.22263] Erase or Hide? Suppressing Spurious Unlearning Neurons for Robust Unlearning
Computer Science > Machine Learning
arXiv:2509.22263 (cs)
[Submitted on 26 Sep 2025 (v1), last revised 4 Mar 2026 (this version, v2)]

Title: Erase or Hide? Suppressing Spurious Unlearning Neurons for Robust Unlearning
Authors: Nakyeong Yang, Dong-Kyum Kim, Jea Kwon, Minsung Kim, Kyomin Jung, Meeyoung Cha

Abstract: Large language models trained on web-scale data can memorize private or sensitive knowledge, raising significant privacy risks. Although some unlearning methods mitigate these risks, they remain vulnerable to "relearning" during subsequent training, allowing a substantial portion of forgotten knowledge to resurface. In this paper, we show that widely used unlearning methods cause shallow alignment: instead of faithfully erasing target knowledge, they generate spurious unlearning neurons that amplify negative influence to hide it. To overcome this limitation, we introduce Ssiuu, a new class of unlearning methods that employs attribution-guided regularization to prevent spurious negative influence and faithfully remove target knowledge. Experimental results confirm that our method reliably erases target knowledge and outperforms strong baselines across two practical retraining scenarios: (1) adversarial injection of private data, and (2) benign attack using an instruction-following benchmark. Our...
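The abstract does not give implementation details of Ssiuu, so the following is only a minimal, hypothetical sketch of what attribution-guided regularization during unlearning could look like: a toy PyTorch model, an activation-times-gradient attribution measure, and a hinge-style penalty on strongly negative neuron attributions. TinyMLP, unlearn_step, and the lam coefficient are illustrative names and assumptions, not taken from the paper.

# Hedged sketch (not the authors' Ssiuu implementation): gradient ascent on the
# forget set, plus a regularizer that discourages neurons whose attribution
# (activation * gradient) turns strongly negative -- the "spurious unlearning
# neuron" pattern described in the abstract. Model, attribution measure, and
# penalty form are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyMLP(nn.Module):
    """Toy stand-in for a model whose hidden neurons we monitor."""
    def __init__(self, d_in=32, d_hidden=64, n_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, n_classes)

    def forward(self, x):
        h = torch.relu(self.fc1(x))   # hidden activations we attribute
        return self.fc2(h), h


def unlearn_step(model, optimizer, x_forget, y_forget, lam=0.1):
    optimizer.zero_grad()
    logits, h = model(x_forget)
    ce = F.cross_entropy(logits, y_forget)
    forget_loss = -ce                 # gradient ascent on the forget data

    # Per-neuron attribution: activation * d(loss)/d(activation), batch-averaged.
    grad_h = torch.autograd.grad(ce, h, create_graph=True)[0]
    attr = (h * grad_h).mean(dim=0)

    # Attribution-guided regularizer: penalize large negative attributions,
    # i.e. neurons that merely suppress (hide) the target knowledge.
    spurious_penalty = torch.relu(-attr).sum()

    (forget_loss + lam * spurious_penalty).backward()
    optimizer.step()
    return forget_loss.item(), spurious_penalty.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyMLP()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(16, 32)           # stand-in forget-set batch
    y = torch.randint(0, 10, (16,))
    for step in range(5):
        fl, sp = unlearn_step(model, opt, x, y)
        print(f"step {step}: forget_loss={fl:.3f}, spurious_penalty={sp:.3f}")

The penalty term here is one plausible way to keep "negative influence" from accumulating in individual neurons during unlearning; the actual regularizer and attribution method used by Ssiuu may differ.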