[2603.00975] Forgetting is Competition: Rethinking Unlearning as Representation Interference in Diffusion Models
Computer Science > Machine Learning

arXiv:2603.00975 (cs)

[Submitted on 1 Mar 2026]

Title: Forgetting is Competition: Rethinking Unlearning as Representation Interference in Diffusion Models

Authors: Ashutosh Ranjan, Vivek Srivastava, Shirish Karande, Murari Mandal

Abstract: Unlearning in text-to-image diffusion models often leads to uneven concept removal and unintended forgetting of unrelated capabilities. This complicates tasks such as copyright compliance, protected data mitigation, artist opt-outs, and policy-driven content updates. As models grow larger and adopt more diverse architectures, achieving precise and selective unlearning while preserving generative quality becomes increasingly challenging. We introduce SurgUn (pronounced "Surgeon"), a surgical unlearning method that applies targeted weight-space updates to remove specific visual concepts in text-conditioned diffusion models. Our approach is motivated by retroactive interference theory, which holds that newly acquired memories can overwrite, suppress, or impede access to prior ones by competing for shared representational pathways. We adapt this principle to diffusion models by inducing retroactive concept interference, enabling focused destabilization of only the target concept while preserving unrelated capabilities through a nov...