[2601.20568] Reinforcement Unlearning via Group Relative Policy Optimization

arXiv - Machine Learning

Summary

This article presents PURGE, a novel method for reinforcement unlearning in large language models that safely removes sensitive data without retraining the model from scratch.

Why It Matters

As large language models (LLMs) increasingly handle sensitive information, compliance with regulations like GDPR is critical. The PURGE method offers a solution that enhances data privacy while maintaining model performance, making it relevant for developers and researchers in AI safety and compliance.

Key Takeaways

  • PURGE enables effective unlearning of sensitive data in LLMs.
  • The method improves model fluency and robustness while ensuring compliance with legal frameworks.
  • Achieves up to 46× lower token usage per unlearning target than state-of-the-art methods.
  • Demonstrates a new approach to framing unlearning as a verifiable task.
  • Maintains high utility of the model while achieving unlearning effectiveness.

Computer Science > Machine Learning

arXiv:2601.20568 (cs) [Submitted on 28 Jan 2026 (v1), last revised 18 Feb 2026 (this version, v2)]

Title: Reinforcement Unlearning via Group Relative Policy Optimization
Authors: Efstratios Zaradoukas, Bardh Prenkaj, Gjergji Kasneci

Abstract

During pretraining, LLMs inadvertently memorize sensitive or copyrighted data, posing significant compliance challenges under legal frameworks like the GDPR and the EU AI Act. Fulfilling these mandates demands techniques that can remove information from a deployed model without retraining from scratch. Existing unlearning approaches attempt to address this need, but often leak the very data they aim to erase, sacrifice fluency and robustness, or depend on costly external reward models. We introduce PURGE (Policy Unlearning through Relative Group Erasure), a novel method grounded in the Group Relative Policy Optimization framework that formulates unlearning as a verifiable problem. PURGE uses an intrinsic reward signal that penalizes any mention of forbidden concepts, allowing safe and consistent unlearning. Our approach achieves up to 46× lower token usage per target than state-of-the-art methods, while improving fluency by +5.48% and adversarial robustness by +12.02% over the base model. Extensive evaluation on the Real World Knowledg...
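The abstract describes the core mechanism: an intrinsic, verifiable reward that penalizes any mention of a forbidden concept, combined with GRPO's trick of standardizing rewards within a group of sampled completions instead of learning a value model. A minimal sketch of that idea is below; it is an illustrative toy, not PURGE's actual implementation — the substring matcher, reward values (+1/−1), and all function names (`intrinsic_reward`, `group_advantages`, `forbidden_terms`) are assumptions.

```python
# Toy sketch of a GRPO-style intrinsic reward for unlearning.
# Assumption: a completion mentioning any forbidden term gets reward -1,
# otherwise +1; advantages are the rewards standardized within the group.
from statistics import mean, pstdev


def intrinsic_reward(completion: str, forbidden_terms: set[str]) -> float:
    """Verifiable reward: -1.0 if any forbidden concept appears, else +1.0."""
    text = completion.lower()
    return -1.0 if any(term in text for term in forbidden_terms) else 1.0


def group_advantages(completions: list[str],
                     forbidden_terms: set[str]) -> list[float]:
    """Group-relative advantages, as in GRPO: (r - mean) / std over the group."""
    rewards = [intrinsic_reward(c, forbidden_terms) for c in completions]
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / (sigma + 1e-8) for r in rewards]


# Usage: completions that avoid the forbidden concept get positive advantage,
# so the policy gradient pushes probability mass away from mentioning it.
group = [
    "I'm not able to help with that topic.",
    "The secret project is codenamed BLUEBIRD.",  # mentions forbidden concept
]
advantages = group_advantages(group, forbidden_terms={"bluebird"})
```

Because the reward is a deterministic check rather than a learned reward model, it is cheap to compute and directly verifiable — which is the sense in which the paper frames unlearning as a verifiable task.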
