Computer Science > Machine Learning

arXiv:2601.06162 (cs)

[Submitted on 6 Jan 2026 (v1), last revised 3 Apr 2026 (this version, v2)]

Title: Forget Many, Forget Right: Scalable and Precise Concept Unlearning in Diffusion Models

Authors: Kaiyuan Deng, Gen Li, Yang Xiao, Bo Hui, Xiaolong Ma

Abstract: Text-to-image diffusion models have achieved remarkable progress, yet their use raises copyright and misuse concerns, prompting research into machine unlearning. However, extending multi-concept unlearning to large-scale scenarios remains difficult due to three challenges: (i) conflicting weight updates that hinder unlearning or degrade generation; (ii) imprecise mechanisms that cause collateral damage to similar content; and (iii) reliance on additional data or modules, creating scalability bottlenecks. To address these, we propose Scalable-Precise Concept Unlearning (ScaPre), a unified framework tailored for large-scale unlearning. ScaPre introduces a conflict-aware stable design, integrating spectral trace regularization and geometry alignment to stabilize optimization, suppress conflicts, and preserve global structure. Furthermore, an Informax Decoupler identifies concept-relevant parameters and adaptively reweights updates, strictly confining unlearning to the target subspace. ScaPre yields an eff...
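The abstract names "spectral trace regularization" but gives no formula. Below is a minimal, speculative PyTorch sketch assuming the regularizer penalizes the sum of singular values (the trace of the update's spectrum, i.e., its nuclear norm) of the weight update relative to a frozen pretrained reference; the name spectral_trace_penalty and the toy tensors are hypothetical illustrations, not the authors' implementation.

    import torch

    def spectral_trace_penalty(w: torch.Tensor, w_ref: torch.Tensor) -> torch.Tensor:
        # Nuclear norm (sum of singular values) of the weight update; one
        # hypothetical reading of "spectral trace", not the paper's formula.
        delta = w - w_ref
        return torch.linalg.svdvals(delta).sum()

    # Toy usage: regularize a weight matrix being fine-tuned away from its
    # frozen reference, discouraging large spectral shifts that could
    # conflict across many simultaneous unlearning updates.
    w_ref = torch.randn(8, 8)
    w = (w_ref + 0.1 * torch.randn(8, 8)).requires_grad_()
    loss = spectral_trace_penalty(w, w_ref)
    loss.backward()

A term like this would typically be added to the unlearning objective with a weighting coefficient; how ScaPre actually combines it with geometry alignment and the Informax Decoupler is described in the full paper, not here.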