[2602.15602] Certified Per-Instance Unlearning Using Individual Sensitivity Bounds

arXiv - Machine Learning 3 min read Article

Summary

This paper presents a novel approach to certified machine unlearning based on adaptive per-instance noise calibration, reducing performance degradation while preserving formal privacy guarantees.

Why It Matters

As machine learning models increasingly handle sensitive data, the ability to unlearn specific data points while maintaining privacy is crucial. This research offers a promising method that enhances the practicality of unlearning in real-world applications, addressing both privacy concerns and model performance.

Key Takeaways

  • Introduces adaptive noise calibration for certified unlearning.
  • Demonstrates reduced noise injection compared to traditional methods.
  • Provides theoretical and empirical support for the proposed approach.
  • Focuses on individual data point sensitivity in unlearning processes.
  • Applicable to both linear and deep learning models.

Computer Science > Machine Learning · arXiv:2602.15602 (cs)
Submitted on 17 Feb 2026
Title: Certified Per-Instance Unlearning Using Individual Sensitivity Bounds
Authors: Hanna Benarroch (DI-ENS), Jamal Atif (CMAP), Olivier Cappé (DI-ENS)

Abstract: Certified machine unlearning can be achieved via noise injection leading to differential privacy guarantees, where noise is calibrated to worst-case sensitivity. Such conservative calibration often results in performance degradation, limiting practical applicability. In this work, we investigate an alternative approach based on adaptive per-instance noise calibration tailored to the individual contribution of each data point to the learned solution. This raises the following challenge: how can one establish formal unlearning guarantees when the mechanism depends on the specific point to be removed? To define individual data point sensitivities in noisy gradient dynamics, we consider the use of per-instance differential privacy. For ridge regression trained via Langevin dynamics, we derive high-probability per-instance sensitivity bounds, yielding certified unlearning with substantially less noise injection. We corroborate our theoretical findings through experiments in linear settings and provide further empirical evidence on the relevance of the approach in deep learning settings.
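To illustrate the gap between worst-case and per-instance calibration, the sketch below uses closed-form ridge regression and measures each point's leave-one-out influence on the solution. This is a simplified stand-in for the paper's setting: the synthetic data, the leave-one-out distance as a sensitivity proxy, and the Gaussian-mechanism noise scale are illustrative assumptions, not the paper's Langevin-dynamics mechanism or its formal per-instance bounds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ridge-regression problem (hypothetical data, not from the paper).
n, d, lam = 200, 5, 1.0
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def ridge(X, y, lam):
    """Closed-form ridge solution: theta = (X^T X + lam*I)^{-1} X^T y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

theta = ridge(X, y, lam)

# Leave-one-out influence of each point: how far the solution moves when
# that point is removed. This proxies the "individual contribution" that
# the paper's per-instance sensitivity bounds formalize.
sens = np.array([
    np.linalg.norm(theta - ridge(np.delete(X, i, axis=0), np.delete(y, i), lam))
    for i in range(n)
])

# Gaussian-mechanism noise scale for a given sensitivity s:
# sigma = s * sqrt(2 * ln(1.25/delta)) / eps.
eps, delta = 1.0, 1e-5
c = np.sqrt(2 * np.log(1.25 / delta)) / eps

sigma_worst = c * sens.max()       # classical worst-case calibration
sigma_per_instance = c * sens      # adaptive per-instance calibration

print(f"worst-case sigma:          {sigma_worst:.4f}")
print(f"median per-instance sigma: {np.median(sigma_per_instance):.4f}")
```

For most points the per-instance scale sits well below the worst-case one, which is the intuition behind the paper's claim of certified unlearning with substantially less noise injection.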

Related Articles

[2604.01676] GPA: Learning GUI Process Automation from Demonstrations
LLMs · arXiv - AI · 3 min

[2604.01413] Adaptive Stopping for Multi-Turn LLM Reasoning
LLMs · arXiv - AI · 4 min

[2603.13842] Fine-tuning is Not Enough: A Parallel Framework for Collaborative Imitation and Reinforcement Learning in End-to-end Autonomous Driving
Machine Learning · arXiv - AI · 4 min

[2603.12510] Red-Teaming Vision-Language-Action Models via Quality Diversity Prompt Generation for Robust Robot Policies
Machine Learning · arXiv - AI · 4 min