[2601.20568] Reinforcement Unlearning via Group Relative Policy Optimization
Summary
This article presents PURGE, a novel method for reinforcement unlearning in large language models that addresses the challenge of safely removing sensitive data from a deployed model without retraining it from scratch.
Why It Matters
As large language models (LLMs) increasingly handle sensitive information, compliance with regulations like GDPR is critical. The PURGE method offers a solution that enhances data privacy while maintaining model performance, making it relevant for developers and researchers in AI safety and compliance.
Key Takeaways
- PURGE enables effective unlearning of sensitive data in LLMs.
- The method improves model fluency and robustness while ensuring compliance with legal frameworks.
- Achieves up to 46× lower token usage per unlearning target than state-of-the-art methods.
- Demonstrates a new approach to framing unlearning as a verifiable task.
- Maintains high utility of the model while achieving unlearning effectiveness.
Computer Science > Machine Learning
arXiv:2601.20568 (cs)
[Submitted on 28 Jan 2026 (v1), last revised 18 Feb 2026 (this version, v2)]
Title: Reinforcement Unlearning via Group Relative Policy Optimization
Authors: Efstratios Zaradoukas, Bardh Prenkaj, Gjergji Kasneci
Abstract: During pretraining, LLMs inadvertently memorize sensitive or copyrighted data, posing significant compliance challenges under legal frameworks like the GDPR and the EU AI Act. Fulfilling these mandates demands techniques that can remove information from a deployed model without retraining from scratch. Existing unlearning approaches attempt to address this need, but often leak the very data they aim to erase, sacrifice fluency and robustness, or depend on costly external reward models. We introduce PURGE (Policy Unlearning through Relative Group Erasure), a novel method grounded in the Group Relative Policy Optimization framework that formulates unlearning as a verifiable problem. PURGE uses an intrinsic reward signal that penalizes any mention of forbidden concepts, allowing safe and consistent unlearning. Our approach achieves up to 46× lower token usage per target than state-of-the-art methods, while improving fluency by +5.48% and adversarial robustness by +12.02% over the base model. Extensive evaluation on the Real World Knowledg...
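The abstract describes two ingredients: an intrinsic reward that penalizes any mention of forbidden concepts, and a group-relative normalization of rewards in the style of Group Relative Policy Optimization (advantage = reward minus the group mean, scaled by the group standard deviation). The sketch below illustrates those two ideas only; the `FORBIDDEN` set, the substring-matching reward, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
import statistics

# Hypothetical forbidden concepts (placeholders, not from the paper).
FORBIDDEN = {"secret codename", "private email"}

def intrinsic_reward(completion: str) -> float:
    """Verifiable intrinsic reward: -1.0 if any forbidden concept
    appears in the completion, +1.0 otherwise. No external reward
    model is consulted."""
    text = completion.lower()
    return -1.0 if any(concept in text for concept in FORBIDDEN) else 1.0

def group_relative_advantages(completions: list[str]) -> list[float]:
    """GRPO-style advantages: score each sampled completion in the
    group, then normalize by the group's mean and standard deviation."""
    rewards = [intrinsic_reward(c) for c in completions]
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mean) / std for r in rewards]

# A sampled group of completions for one prompt: one leaks a forbidden
# concept, two do not.
group = [
    "The secret codename was mentioned here.",
    "A safe completion with no sensitive content.",
    "Another harmless answer.",
]
advantages = group_relative_advantages(group)
```

Under this normalization the leaking completion receives a negative advantage and the safe completions a positive one, so a policy-gradient update pushes probability mass away from mentioning the forbidden concept without needing a learned reward model.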