[2602.07058] SPARE: Self-distillation for PARameter-Efficient Removal
Computer Science > Computer Vision and Pattern Recognition
arXiv:2602.07058 (cs)
[Submitted on 4 Feb 2026 (v1), last revised 25 Mar 2026 (this version, v2)]

Title: SPARE: Self-distillation for PARameter-Efficient Removal
Authors: Natnael Mola, Leonardo S. B. Pereira, Carolina R. Kelsch, Luis H. Arribas, Juan C. S. M. Avedillo

Abstract: Machine Unlearning aims to remove the influence of specific data or concepts from trained models while preserving overall performance, a capability increasingly required by data protection regulations and responsible AI practices. Despite recent progress, unlearning in text-to-image diffusion models remains challenging due to high computational costs and the difficulty of balancing effective forgetting with retention of unrelated concepts. We introduce Self-distillation for PARameter-Efficient Removal (SPARE), a two-stage unlearning method for image generation that combines parameter localization with self-distillation. SPARE first identifies the parameters most responsible for generating the unwanted concept using gradient-based saliency, and constrains updates through sparse low-rank adapters, ensuring lightweight, localized modifications. In a second stage, SPARE applies a self-distillation objective that overwrites the unwanted concept with a user-defined surrogate while preserving behavior...
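The two stages described in the abstract can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's implementation: a single `nn.Linear` stands in for a diffusion model's layers, a sparse gradient mask stands in for the paper's sparse low-rank adapters, and the forgetting objective, surrogate input, and hyperparameters are all hypothetical placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
# Toy stand-in for a text-conditioned layer of a diffusion model (hypothetical).
model = nn.Linear(8, 8)

def saliency_scores(model, x_forget):
    """Stage 1: gradient-based saliency w.r.t. a forgetting objective."""
    out = model(x_forget)
    loss = out.pow(2).mean()          # placeholder objective on the unwanted concept
    loss.backward()
    return model.weight.grad.abs()    # |gradient| as a per-parameter saliency score

x_forget = torch.randn(4, 8)          # inputs evoking the unwanted concept
scores = saliency_scores(model, x_forget)

# Keep only the top-k most salient weights trainable (sparse, localized update).
k = 8
mask = torch.zeros_like(scores)
mask.view(-1)[scores.flatten().topk(k).indices] = 1.0

# Stage 2: self-distillation. A frozen copy of the original model, fed a
# user-defined surrogate input, supervises the student on the forget input,
# so the unwanted concept is overwritten by the surrogate's behavior.
teacher = nn.Linear(8, 8)
teacher.load_state_dict(model.state_dict())
for p in teacher.parameters():
    p.requires_grad_(False)

x_surrogate = torch.randn(4, 8)       # user-defined surrogate concept
w0 = model.weight.detach().clone()    # snapshot to verify locality of the edit
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(20):
    opt.zero_grad()
    with torch.no_grad():
        target = teacher(x_surrogate)             # teacher output for surrogate
    loss = F.mse_loss(model(x_forget), target)    # distill onto forget input
    loss.backward()
    model.weight.grad *= mask                     # restrict update to salient weights
    model.bias.grad.zero_()
    opt.step()
```

After training, only the k masked weights move; every other parameter is bit-identical to the original model, which is the "lightweight, localized modification" property the abstract claims for the adapter-constrained updates.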