[2602.17692] Agentic Unlearning: When LLM Agent Meets Machine Unlearning
Summary
The paper introduces 'agentic unlearning,' a novel approach that removes sensitive information from both model parameters and persistent memory in LLM agents, addressing the limitations of existing parameter-only unlearning methods.
Why It Matters
As AI systems increasingly handle sensitive data, effective unlearning mechanisms are crucial for privacy and compliance. This research proposes a comprehensive framework that enhances data security in AI models, making it relevant for developers and researchers in machine learning and AI safety.
Key Takeaways
- Agentic unlearning targets both model parameters and memory pathways to enhance data privacy.
- The Synchronized Backflow Unlearning (SBU) framework integrates memory and parameter unlearning processes.
- Experiments demonstrate SBU's effectiveness in reducing sensitive information traces with minimal impact on model performance.
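The parameter pathway described above steers forget-set outputs toward a high-entropy prior. A minimal, hypothetical sketch of such an objective (not the paper's actual loss; function name and scalar toy form are illustrative assumptions) is the KL divergence between the model's next-token distribution and the uniform distribution, which is zero exactly when the model is maximally uncertain:

```python
import math

def stochastic_reference_alignment_loss(logits):
    """Toy sketch of a high-entropy alignment objective (assumed form, not
    the paper's exact loss): KL(model || uniform) for one token position.

    logits: list of raw scores over the vocabulary for a forget-set prompt.
    Minimizing the returned value pushes the predictive distribution toward
    the maximum-entropy (uniform) prior, suppressing confident recall of
    the unlearned content. In practice this would run batched on tensors.
    """
    # Numerically stable softmax over the logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    # KL(p || U) = sum_i p_i * log(p_i / (1/V)) = sum_i p_i log p_i + log V
    return sum(p * math.log(p) for p in probs if p > 0) + math.log(len(logits))
```

For uniform logits the loss is 0; a sharply peaked distribution yields a large positive value, so gradient descent on this term drives the model toward "don't know" behavior on the forget set.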
Computer Science > Machine Learning
arXiv:2602.17692 (cs) — Submitted on 6 Feb 2026
Title: Agentic Unlearning: When LLM Agent Meets Machine Unlearning
Authors: Bin Wang, Fan Wang, Pingping Wang, Jinyu Cong, Yang Yu, Yilong Yin, Zhongyi Han, Benzheng Wei
Abstract: In this paper, we introduce agentic unlearning, which removes specified information from both model parameters and persistent memory in agents with closed-loop interaction. Existing unlearning methods target parameters alone, leaving two critical gaps: (i) parameter-memory backflow, where retrieval reactivates parametric remnants or memory artifacts reintroduce sensitive content, and (ii) the absence of a unified strategy that covers both parameter and memory pathways. We present Synchronized Backflow Unlearning (SBU), a framework that unlearns jointly across parameter and memory pathways. The memory pathway performs dependency closure-based unlearning that prunes isolated entities while logically invalidating shared artifacts. The parameter pathway employs stochastic reference alignment to guide model outputs toward a high-entropy prior. These pathways are integrated via a synchronized dual-update protocol, forming a closed-loop mechanism where memory unlearning and parametric suppression reinforce each other to prevent cross-pathway recontamination. Experim...
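The memory pathway's dependency closure-based unlearning can be pictured as a graph operation over stored entries. The sketch below is a hypothetical toy (the function name, data layout, and the prune/invalidate split are assumptions inferred from the abstract, not the paper's implementation): entries that depend only on forget-target entities are deleted outright, while entries shared with retained entities are kept but logically invalidated so retrieval skips them.

```python
def dependency_closure_unlearn(memory, forget_entities):
    """Toy sketch of dependency closure-based memory unlearning (assumed
    semantics, not the paper's code).

    memory: dict mapping entry_id -> set of entity names the entry depends on.
    forget_entities: set of entity names targeted for unlearning.

    Returns (pruned, invalidated):
      pruned      -- entries whose dependencies lie entirely in the forget
                     set ("isolated"); safe to delete from the store.
      invalidated -- entries mixing forget and retained entities ("shared
                     artifacts"); kept structurally but flagged so retrieval
                     excludes them, avoiding backflow of sensitive content.
    """
    pruned, invalidated = set(), set()
    for entry_id, deps in memory.items():
        if not (deps & forget_entities):
            continue  # entry untouched by the forget request
        if deps <= forget_entities:
            pruned.add(entry_id)       # isolated: hard-delete
        else:
            invalidated.add(entry_id)  # shared: logically invalidate
    return pruned, invalidated
```

Under this reading, the synchronized dual-update protocol would interleave such memory passes with parameter-side suppression steps, so that neither pathway reintroduces content the other has already removed.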