[2601.18650] FaLW: A Forgetting-aware Loss Reweighting for Long-tailed Unlearning
Summary
The paper introduces FaLW, a forgetting-aware loss reweighting method for machine unlearning that addresses the challenges posed by long-tailed forget sets, improving compliance with data privacy requirements.
Why It Matters
As data privacy regulations become increasingly stringent, effective machine unlearning methods are essential. This research fills a critical gap by focusing on long-tailed distributions, which are common in real-world data, thereby improving the applicability of unlearning techniques in diverse contexts.
Key Takeaways
- FaLW addresses unlearning in long-tailed data distributions.
- It identifies and mitigates two key issues: Heterogeneous Unlearning Deviation and Skewed Unlearning Deviation.
- The method employs a plug-and-play, instance-wise dynamic loss reweighting scheme.
- Extensive experiments validate the effectiveness of FaLW.
- This research contributes to the broader field of data privacy and machine learning.
arXiv:2601.18650 (cs)
Submitted on 26 Jan 2026 (v1); last revised 21 Feb 2026 (this version, v2).
Authors: Liheng Yu, Zhe Zhao, Yuxuan Wang, Pengkun Wang, Xiaofeng Cao, Binwu Wang, Yang Wang
Abstract: Machine unlearning, which aims to efficiently remove the influence of specific data from trained models, is crucial for upholding data privacy regulations like the "right to be forgotten". However, existing research predominantly evaluates unlearning methods on relatively balanced forget sets. This overlooks a common real-world scenario where the data to be forgotten, such as a user's activity records, follows a long-tailed distribution. Our work is the first to investigate this critical research gap. We find that in such long-tailed settings, existing methods suffer from two key issues: Heterogeneous Unlearning Deviation and Skewed Unlearning Deviation. To address these challenges, we propose FaLW, a plug-and-play, instance-wise dynamic loss reweighting method. FaLW innovatively assesses the unlearning state of each sample by comparing its predictive probability to the distribution of unseen data from the same class. Based on this, it uses a forgetting-aware reweighting scheme, modulated by a...
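To make the core idea concrete, the sketch below illustrates one way an instance-wise, forgetting-aware reweighting could work: each forget sample's predictive probability is compared against the distribution of probabilities that the model assigns to unseen data of the same class, and samples that still look "memorized" (probability well above the unseen-data distribution) receive larger unlearning weights. The z-score normalization, sigmoid mapping, and `temperature` parameter here are illustrative assumptions, not the paper's actual formula (which is truncated in the abstract).

```python
import numpy as np

def forgetting_aware_weights(forget_probs, reference_probs, temperature=1.0):
    """Assign an unlearning weight in (0, 1) to each forget sample.

    forget_probs:    model's predicted probabilities on forget-set samples
    reference_probs: predicted probabilities on unseen samples of the same class
    temperature:     hypothetical smoothing knob (assumption, not from the paper)
    """
    ref_mean = reference_probs.mean()
    ref_std = reference_probs.std() + 1e-8  # avoid division by zero
    # z-score of each forget sample against the unseen-data distribution
    z = (forget_probs - ref_mean) / ref_std
    # high z => sample still memorized => larger weight in the unlearning loss
    return 1.0 / (1.0 + np.exp(-z / temperature))

def reweighted_unlearning_loss(per_sample_losses, weights):
    """Weighted mean of per-sample unlearning losses."""
    return float(np.sum(weights * per_sample_losses) / np.sum(weights))

# Example: the first forget sample (p=0.9) is still confidently predicted
# relative to unseen same-class data (mean p=0.5), so it gets a large weight;
# the last (p=0.1) is already "forgotten" and is down-weighted.
w = forgetting_aware_weights(np.array([0.9, 0.5, 0.1]),
                             np.array([0.4, 0.5, 0.6, 0.5]))
```

A dynamic scheme would recompute these weights as unlearning progresses, so that already-forgotten samples stop dominating the gradient, which is one plausible way to counteract the skewed deviation the paper describes.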