[2602.14938] Variance-Reduced $(\varepsilon,\delta)$-Unlearning using Forget Set Gradients

arXiv - Machine Learning · 4 min read

Summary

This paper introduces the Variance-Reduced Unlearning (VRU) algorithm, which achieves $(\varepsilon,\delta)$-unlearning while incorporating forget set gradients directly into its update rule, improving on existing first-order methods.

Why It Matters

As data privacy concerns grow, effective unlearning methods are essential for machine learning models. VRU pairs formal guarantees of data removal with competitive model performance, bridging a gap between certified unlearning methods and unguaranteed empirical heuristics.
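For reference, the $(\varepsilon,\delta)$-unlearning guarantee is typically stated by analogy with $(\varepsilon,\delta)$-differential privacy; a standard formulation (the paper's precise definition may differ in details) compares the unlearning mechanism against retraining from scratch on the retained data:

```latex
% Standard (\varepsilon,\delta)-unlearning guarantee, by analogy with differential privacy.
% A(D): (randomized) learning algorithm on dataset D; D_f \subseteq D: forget set;
% U(A(D), D, D_f): output of the unlearning mechanism.
% For every measurable set S of models:
\Pr\bigl[\,U(A(D), D, D_f) \in S\,\bigr] \le e^{\varepsilon}\,\Pr\bigl[\,A(D \setminus D_f) \in S\,\bigr] + \delta,
\qquad
\Pr\bigl[\,A(D \setminus D_f) \in S\,\bigr] \le e^{\varepsilon}\,\Pr\bigl[\,U(A(D), D, D_f) \in S\,\bigr] + \delta.
```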

Key Takeaways

  • VRU is, to the authors' knowledge, the first first-order algorithm to incorporate forget set gradients directly into the unlearning update rule (one way this can work is illustrated in the sketch after this list).
  • The algorithm provably satisfies $(\varepsilon,\delta)$-unlearning, giving formal guarantees on the removal of the forget set from trained models.
  • VRU achieves strictly improved convergence rates, i.e. a better dependence on the target error, compared to existing first-order $(\varepsilon,\delta)$-unlearning methods.
  • Empirical results show VRU outperforming state-of-the-art certified unlearning techniques.
  • The work addresses the need for unlearning methods that are both empirically effective and formally certified.
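To make the first takeaway concrete: one way a first-order method can use forget set gradients as a direct optimization signal is through the identity that rewrites the retain-set gradient as a combination of the full-data and forget-set gradients. The sketch below is a minimal illustration of that idea in Python; the function names are hypothetical, and the actual VRU update rule, step sizes, and noise calibration are specified in the paper, not here.

```python
import numpy as np

def retain_gradient(grad_full, grad_forget, n_total, n_forget):
    """Gradient of the retain-set loss, expressed through the forget-set gradient.

    Uses the identity
        n * grad_full = (n - n_f) * grad_retain + n_f * grad_forget,
    so the forget-set gradient enters the update as a direct optimization
    signal rather than only through noise calibration.
    """
    return (n_total * grad_full - n_forget * grad_forget) / (n_total - n_forget)

def noisy_unlearning_step(theta, grad_full, grad_forget, n_total, n_forget,
                          lr=0.1, sigma=0.0, rng=np.random.default_rng(0)):
    """One noisy first-order unlearning step (illustrative, not the paper's VRU rule).

    sigma would be calibrated from (epsilon, delta) and the problem's
    sensitivity; here it is left as a free parameter.
    """
    g = retain_gradient(grad_full, grad_forget, n_total, n_forget)
    noise = sigma * rng.standard_normal(theta.shape)
    return theta - lr * g + noise
```

The "variance-reduced" in the paper's title suggests a control-variate-style combination of such gradients, though the exact construction is given in the paper itself. The abstract below spells out the gap this fills.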

Computer Science > Machine Learning
arXiv:2602.14938 (cs) [Submitted on 16 Feb 2026]

Title: Variance-Reduced $(\varepsilon,\delta)$-Unlearning using Forget Set Gradients
Authors: Martin Van Waerebeke, Marco Lorenzi, Kevin Scaman, El Mahdi El Mhamdi, Giovanni Neglia

Abstract: In machine unlearning, $(\varepsilon,\delta)$-unlearning is a popular framework that provides formal guarantees on the effectiveness of the removal of a subset of training data, the forget set, from a trained model. For strongly convex objectives, existing first-order methods achieve $(\varepsilon,\delta)$-unlearning, but they only use the forget set to calibrate injected noise, never as a direct optimization signal. In contrast, efficient empirical heuristics often exploit the forget samples (e.g., via gradient ascent) but come with no formal unlearning guarantees. We bridge this gap by presenting the Variance-Reduced Unlearning (VRU) algorithm. To the best of our knowledge, VRU is the first first-order algorithm that directly includes forget set gradients in its update rule, while provably satisfying $(\varepsilon,\delta)$-unlearning. We establish the convergence of VRU and show that incorporating the forget set yields strictly improved rates, i.e. a better dependence on the achieved error compared to existing first-order $(\varepsilon,\delta)$-un...
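For contrast with the noise-only baselines the abstract describes ("they only use the forget set to calibrate injected noise"), prior certified methods typically perturb an approximately retrained model with Gaussian noise scaled to a sensitivity bound. A minimal sketch, assuming the standard Gaussian-mechanism calibration $\sigma = \Delta\sqrt{2\ln(1.25/\delta)}/\varepsilon$ (the bounds used in this specific line of work may differ):

```python
import numpy as np

def gaussian_mechanism_sigma(sensitivity, epsilon, delta):
    """Standard Gaussian-mechanism noise scale for an (epsilon, delta) guarantee.

    `sensitivity` upper-bounds how far the (approximately) retrained parameters
    can move when the forget set is removed; deriving such a bound (e.g., from
    strong convexity) is the analytical core of certified unlearning methods.
    """
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def release_unlearned_model(theta_approx, sensitivity, epsilon, delta,
                            rng=np.random.default_rng(0)):
    """Perturb an approximately retrained model so that its distribution is
    (epsilon, delta)-indistinguishable from retraining from scratch."""
    sigma = gaussian_mechanism_sigma(sensitivity, epsilon, delta)
    return theta_approx + sigma * rng.standard_normal(theta_approx.shape)
```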
