[2510.04058] Unlearning in Diffusion models under Data Constraints: A Variational Inference Approach
Computer Science > Machine Learning

arXiv:2510.04058 (cs)

[Submitted on 5 Oct 2025 (v1), last revised 23 Mar 2026 (this version, v4)]

Title: Unlearning in Diffusion models under Data Constraints: A Variational Inference Approach

Authors: Subhodip Panda, Varun M S, Shreyans Jain, Sarthak Kumar Maharana, Prathosh A.P

Abstract: For a responsible and safe deployment of diffusion models across various domains, regulating their generated outputs is desirable, since such models can produce undesired, violent, or obscene content. To tackle this problem, recent works apply machine unlearning to make pre-trained generative models forget training data points containing these undesired features. However, these methods prove ineffective in data-constrained settings where the full training dataset is inaccessible. Thus, the principal objective of this work is to propose a machine unlearning methodology that prevents a pre-trained diffusion model from generating outputs containing undesired features in such a data-constrained setting. Our proposed method, termed Variational Diffusion Unlearning (VDU), is computationally efficient and requires access only to a subset of the training data containing the undesired features. Our approach is inspired by the variational inference framework…
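The abstract only states that unlearning proceeds with access to a forget subset alone; it does not spell out the VDU objective. The general idea behind such data-constrained unlearning can be illustrated with a toy sketch: take gradient-ascent steps on a diffusion-style noise-prediction loss evaluated only on the forget subset, so the model's fit to the undesired data degrades. This is a hypothetical illustration under assumed names (`noise_pred_loss`, a linear toy denoiser `W`), not the paper's actual variational objective.

```python
# Hypothetical sketch: unlearning by gradient ASCENT on the noise-prediction
# loss over a small "forget" subset only (data-constrained setting).
# This is NOT the paper's VDU method; it only illustrates the general idea.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "denoiser": predicts the added noise from a noisy input.
W = rng.normal(scale=0.1, size=(4, 4))

def noise_pred_loss(W, x0, eps, alpha=0.7):
    """Epsilon-prediction MSE at a fixed noise level (assumed form)."""
    xt = np.sqrt(alpha) * x0 + np.sqrt(1 - alpha) * eps  # forward diffusion step
    pred = xt @ W                                        # predicted noise
    return np.mean((pred - eps) ** 2)

# Only a small forget subset is available, not the whole training set.
x_forget = rng.normal(size=(8, 4))
eps = rng.normal(size=(8, 4))

before = noise_pred_loss(W, x_forget, eps)

# One gradient-ascent step via finite differences (illustration only).
lr, h = 0.5, 1e-5
grad = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp = W.copy()
        Wp[i, j] += h
        grad[i, j] = (noise_pred_loss(Wp, x_forget, eps) - before) / h
W_unlearned = W + lr * grad  # ascend: make the model fit the forget set worse

after = noise_pred_loss(W_unlearned, x_forget, eps)
assert after > before  # loss on the undesired subset has increased
```

In practice an unlearning objective must also preserve behavior on the retain distribution, which is where the paper's variational formulation comes in; the naive ascent above would degrade the model everywhere if run unchecked.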