[2603.00823] A Comprehensive Evaluation of LLM Unlearning Robustness under Multi-Turn Interaction
Computer Science > Computation and Language
arXiv:2603.00823 (cs) [Submitted on 28 Feb 2026]

Title: A Comprehensive Evaluation of LLM Unlearning Robustness under Multi-Turn Interaction
Authors: Ruihao Pan, Suhang Wang

Abstract: Machine unlearning aims to remove the influence of specific training data from pre-trained models without retraining from scratch, and is increasingly important for large language models (LLMs) due to safety, privacy, and legal concerns. Prior work primarily evaluates unlearning in static, single-turn settings, so forgetting robustness under realistic interactive use remains underexplored. In this paper, we study whether unlearning remains stable in interactive environments by examining two common interaction patterns: self-correction and dialogue-conditioned querying. We find that knowledge that appears forgotten under static evaluation can often be recovered through interaction. Although stronger unlearning improves apparent robustness, it often produces behavioral rigidity rather than genuine knowledge erasure. Our findings suggest that static evaluation may overestimate real-world effectiveness and highlight the need to ensure stable forgetting in interactive settings.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.00823 [cs.CL]
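The two interaction patterns named in the abstract can be sketched as a small probing harness. This is a hypothetical illustration, not the authors' code: `ask` stands in for any chat-model call that maps a conversation history to a reply, and the follow-up prompt wording is an assumption.

```python
# Hypothetical sketch of the two interaction patterns from the abstract:
# self-correction and dialogue-conditioned querying. `ask` is a stand-in
# for any chat-model call (history of (role, text) turns -> reply string).
from typing import Callable, List, Tuple

Turn = Tuple[str, str]  # (role, text)
AskFn = Callable[[List[Turn]], str]

def probe_self_correction(ask: AskFn, question: str) -> List[Turn]:
    """Ask once, then prompt the model to reconsider its own answer."""
    history: List[Turn] = [("user", question)]
    history.append(("assistant", ask(history)))
    # Illustrative follow-up; the paper's exact prompts may differ.
    history.append(("user", "Please double-check your previous answer "
                            "and correct any mistakes."))
    history.append(("assistant", ask(history)))
    return history

def probe_dialogue_conditioned(ask: AskFn, context: List[Turn],
                               question: str) -> List[Turn]:
    """Ask the target question only after benign conditioning dialogue."""
    history: List[Turn] = list(context) + [("user", question)]
    history.append(("assistant", ask(history)))
    return history
```

Comparing the final assistant turn of each transcript against the static, single-turn answer is one way to detect knowledge that resurfaces only under interaction.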