[2602.14938] Variance-Reduced $(\varepsilon,\delta)$-Unlearning using Forget Set Gradients
Summary
This paper introduces the Variance-Reduced Unlearning (VRU) algorithm, which improves the $(\varepsilon,\delta)$-unlearning process by directly incorporating forget set gradients, enhancing performance over existing methods.
Why It Matters
As data privacy concerns grow, effective unlearning methods are essential for machine learning models. The VRU algorithm provides a formalized approach to ensure that data can be removed from models while maintaining performance, addressing a critical gap in current methodologies.
Key Takeaways
- VRU is the first algorithm to incorporate forget set gradients directly into the unlearning process.
- The algorithm provides formal guarantees for effective data removal from trained models.
- VRU demonstrates improved convergence rates compared to existing $(\varepsilon,\delta)$-unlearning methods.
- Empirical results show VRU outperforms state-of-the-art certified unlearning techniques.
- The research addresses a significant need for robust unlearning methods in machine learning.
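The core idea in the takeaways above, using the forget set as a direct optimization signal rather than only for noise calibration, can be illustrated with a toy first-order update. This is a minimal sketch, not the paper's actual VRU rule; the learning rate, ascent weight, and noise scale are all hypothetical parameters chosen for illustration:

```python
import numpy as np

def unlearning_step(theta, grad_retain, grad_forget,
                    lr=0.1, ascent_weight=0.1, sigma=0.01, rng=None):
    """One illustrative first-order unlearning update (NOT the paper's exact rule).

    Descends on the retain-set gradient, ascends on the forget-set gradient
    (the 'direct optimization signal' the summary describes), then injects
    Gaussian noise, as (epsilon, delta)-unlearning schemes typically do.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Combine descent on retained data with ascent on the forget set.
    direction = grad_retain - ascent_weight * grad_forget
    # Calibrated Gaussian noise (scale sigma is a placeholder here).
    noise = sigma * rng.standard_normal(theta.shape)
    return theta - lr * direction + noise
```

With `sigma=0` the step reduces to plain descent on the retain gradient minus a scaled forget gradient, which makes the role of each term easy to inspect.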
Computer Science > Machine Learning
arXiv:2602.14938 (cs)
[Submitted on 16 Feb 2026]
Title: Variance-Reduced $(\varepsilon,\delta)$-Unlearning using Forget Set Gradients
Authors: Martin Van Waerebeke, Marco Lorenzi, Kevin Scaman, El Mahdi El Mhamdi, Giovanni Neglia
Abstract: In machine unlearning, $(\varepsilon,\delta)$-unlearning is a popular framework that provides formal guarantees on the effectiveness of removing a subset of the training data, the forget set, from a trained model. For strongly convex objectives, existing first-order methods achieve $(\varepsilon,\delta)$-unlearning, but they use the forget set only to calibrate injected noise, never as a direct optimization signal. In contrast, efficient empirical heuristics often exploit the forget samples (e.g., via gradient ascent) but come with no formal unlearning guarantees. We bridge this gap by presenting the Variance-Reduced Unlearning (VRU) algorithm. To the best of our knowledge, VRU is the first first-order algorithm that directly includes forget set gradients in its update rule while provably satisfying $(\varepsilon,\delta)$-unlearning. We establish the convergence of VRU and show that incorporating the forget set yields strictly improved rates, i.e., a better dependence on the achieved error compared to existing first-order $(\varepsilon,\delta)$-un...
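For contrast, the noise calibration that the abstract says existing first-order methods rely on typically follows the Gaussian-mechanism pattern from differential privacy. The sketch below uses the standard scale $\sigma = \sqrt{2\ln(1.25/\delta)}\,\Delta/\varepsilon$; the paper's exact calibration is not given in this summary, so both the sensitivity bound `sensitivity` and this formula are assumptions for illustration:

```python
import math

def gaussian_noise_scale(sensitivity, epsilon, delta):
    """Standard Gaussian-mechanism noise scale from differential privacy:
    sigma = sqrt(2 * ln(1.25 / delta)) * sensitivity / epsilon.
    The paper's (epsilon, delta)-unlearning calibration may differ."""
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon
```

As expected, the required noise shrinks when the unlearning budget `epsilon` grows, which is why methods that tighten their sensitivity analysis can inject less noise for the same guarantee.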