[2602.15602] Certified Per-Instance Unlearning Using Individual Sensitivity Bounds
Summary
This article summarizes an approach to certified machine unlearning based on adaptive per-instance noise calibration, which reduces the performance degradation caused by worst-case noise while preserving formal unlearning guarantees.
Why It Matters
As machine learning models increasingly handle sensitive data, the ability to unlearn specific data points while maintaining privacy is crucial. This research offers a promising method that enhances the practicality of unlearning in real-world applications, addressing both privacy concerns and model performance.
Key Takeaways
- Introduces adaptive noise calibration for certified unlearning.
- Demonstrates substantially reduced noise injection compared to worst-case sensitivity calibration.
- Provides theoretical and empirical support for the proposed approach.
- Focuses on individual data point sensitivity in unlearning processes.
- Applicable to both linear and deep learning models.
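The core idea behind the takeaways above can be sketched with a standard Gaussian mechanism: the noise scale needed for a differential-privacy-style guarantee grows with sensitivity, so calibrating to each point's own sensitivity instead of the worst case lets typical points be unlearned with far less noise. The sensitivity values and the calibration formula below are a generic illustration, not the paper's actual bounds.

```python
import numpy as np

def gaussian_sigma(sensitivity, eps=1.0, delta=1e-5):
    """Classic Gaussian-mechanism noise scale for an (eps, delta) guarantee:
    sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / eps."""
    return sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps

# Hypothetical per-point sensitivities of a learned solution: most points
# barely influence the model, one outlier dominates the worst case.
per_point = np.array([0.05, 0.08, 0.10, 0.90])
worst_case = per_point.max()

sigma_worst = gaussian_sigma(worst_case)    # one large scale for every removal
sigma_adaptive = gaussian_sigma(per_point)  # per-instance scales

# The ratio of adaptive to worst-case noise equals per_point / worst_case,
# so the three typical points need roughly 9x to 18x less noise.
print(sigma_adaptive / sigma_worst)
```

The open question the paper addresses is how such point-dependent calibration can still yield formal guarantees, since the mechanism itself now depends on the point being removed.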
Computer Science > Machine Learning
arXiv:2602.15602 (cs) [Submitted on 17 Feb 2026]
Title: Certified Per-Instance Unlearning Using Individual Sensitivity Bounds
Authors: Hanna Benarroch (DI-ENS), Jamal Atif (CMAP), Olivier Cappé (DI-ENS)
Abstract: Certified machine unlearning can be achieved via noise injection leading to differential privacy guarantees, where noise is calibrated to worst-case sensitivity. Such conservative calibration often results in performance degradation, limiting practical applicability. In this work, we investigate an alternative approach based on adaptive per-instance noise calibration tailored to the individual contribution of each data point to the learned solution. This raises the following challenge: how can one establish formal unlearning guarantees when the mechanism depends on the specific point to be removed? To define individual data point sensitivities in noisy gradient dynamics, we consider the use of per-instance differential privacy. For ridge regression trained via Langevin dynamics, we derive high-probability per-instance sensitivity bounds, yielding certified unlearning with substantially less noise injection. We corroborate our theoretical findings through experiments in linear settings and provide further empirical evidence on the relevance of the approach in deep learning set...
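The setting named in the abstract, ridge regression trained via Langevin dynamics with per-instance sensitivities, can be sketched as follows. The hyperparameters and the sensitivity proxy (the norm of each point's gradient contribution at the trained parameters) are illustrative assumptions standing in for the paper's high-probability bounds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ridge regression data (sizes and noise level are arbitrary choices).
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)
lam = 1.0  # ridge penalty

def ridge_langevin(X, y, lam, steps=2000, eta=1e-2, temp=1e-6, seed=1):
    """Noisy gradient descent (unadjusted Langevin dynamics) on the ridge
    objective (1/2n)||Xw - y||^2 + (lam/2)||w||^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n + lam * w
        w = w - eta * grad + np.sqrt(2 * eta * temp) * rng.normal(size=d)
    return w

w = ridge_langevin(X, y, lam)

# Crude per-instance sensitivity proxy: the norm of point i's gradient term
# at the trained parameters, |resid_i| * ||x_i||. Points that barely shaped
# the fit get small values, so unlearning them would need little extra noise.
resid = X @ w - y
per_point = np.abs(resid) * np.linalg.norm(X, axis=1)
worst_case = per_point.max()

# Adaptive calibration would scale unlearning noise for point i to
# per_point[i] instead of worst_case, the gap this sketch quantifies.
print(f"median/worst sensitivity ratio: {np.median(per_point) / worst_case:.2f}")
```

In the worst-case regime, every removal pays for the single most influential point; the ratio printed above shows how much slack per-instance calibration recovers for a typical point in this toy setting.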