[2509.23362] Dual-Space Smoothness for Robust and Balanced LLM Unlearning
Computer Science > Computation and Language
arXiv:2509.23362 (cs)
[Submitted on 27 Sep 2025 (v1), last revised 28 Mar 2026 (this version, v2)]

Title: Dual-Space Smoothness for Robust and Balanced LLM Unlearning
Authors: Han Yan, Zheyuan Liu, Meng Jiang

Abstract: As large language models evolve, Machine Unlearning has emerged to address growing concerns around user privacy, copyright infringement, and overall safety. Yet state-of-the-art (SOTA) unlearning methods often suffer from catastrophic forgetting and metric imbalance, for example by over-optimizing one objective (e.g., unlearning effectiveness, utility preservation, or privacy protection) at the expense of the others. In addition, small perturbations in the representation or parameter space can be exploited by relearning and jailbreak attacks. To address these challenges, we propose PRISM, a unified framework that enforces dual-space smoothness in the representation and parameter spaces to improve robustness and balance unlearning metrics. PRISM consists of two smoothness-optimization stages: (i) a representation-space stage that employs a robustly trained probe to defend against jailbreak attacks, and (ii) a parameter-space stage that decouples retain-forget gradient conflicts, reduces imbalance, and smooths the parameter space to mitigate relearning attacks. Extensive experime...
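The abstract does not specify PRISM's update rules, but the two parameter-space ideas it names have well-known stand-ins that can be sketched. Below, "decoupling retain-forget gradient conflicts" is illustrated with a PCGrad-style projection and "smoothing the parameter space" with a sharpness-aware (SAM-style) perturbed gradient; both are assumed proxies for exposition, not the paper's actual algorithm.

```python
import numpy as np

def decouple(g_forget: np.ndarray, g_retain: np.ndarray) -> np.ndarray:
    """PCGrad-style conflict removal (illustrative, not PRISM itself):
    if the forget gradient opposes the retain gradient, project out the
    conflicting component so the unlearning step does not hurt retention."""
    dot = float(np.dot(g_forget, g_retain))
    if dot < 0:
        g_forget = g_forget - (dot / np.dot(g_retain, g_retain)) * g_retain
    return g_forget

def smoothed_grad(grad_fn, theta: np.ndarray, rho: float = 0.05) -> np.ndarray:
    """SAM-style smoothing (illustrative): evaluate the gradient at a
    worst-case nearby point theta + eps, which favors flat minima that
    are harder to undo with small relearning perturbations."""
    g = grad_fn(theta)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    return grad_fn(theta + eps)

# Toy check: a forget gradient that conflicts with the retain gradient
# no longer opposes it after projection.
g_f = decouple(np.array([1.0, -1.0]), np.array([0.0, 1.0]))
print(np.dot(g_f, np.array([0.0, 1.0])))  # 0.0 (conflict removed)
```

In this sketch the projected forget gradient keeps its retain-orthogonal component, so unlearning pressure survives while the directly destructive direction is dropped.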