[2509.13007] ReTrack: Data Unlearning in Diffusion Models through Redirecting the Denoising Trajectory


arXiv - Machine Learning

Computer Science > Machine Learning
arXiv:2509.13007 (cs)
[Submitted on 16 Sep 2025 (v1), last revised 29 Mar 2026 (this version, v2)]

Title: ReTrack: Data Unlearning in Diffusion Models through Redirecting the Denoising Trajectory
Authors: Qitan Shi, Cheng Jin, Jiawei Zhang, Yuantao Gu

Abstract: Diffusion models excel at generating high-quality, diverse images but suffer from training data memorization, raising critical privacy and safety concerns. Data unlearning has emerged to mitigate this issue by removing the influence of specific data without retraining from scratch. We propose ReTrack, a fast and effective data unlearning method for diffusion models. ReTrack employs importance sampling to construct a more efficient fine-tuning loss, which we approximate by retaining only dominant terms. This yields an interpretable objective that redirects denoising trajectories toward the $k$-nearest neighbors, enabling efficient unlearning while preserving generative quality. Experiments on MNIST T-Shirt, CelebA-HQ, CIFAR-10, and Stable Diffusion show that ReTrack achieves state-of-the-art performance, striking the best trade-off between unlearning strength and generation quality preservation.

Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2509.13007 [cs.LG] (arXiv:2509.13007v2 [cs.LG])
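The abstract's core idea can be illustrated with a toy sketch: rather than fitting the denoising target of the forgotten sample, fine-tuning pulls the model's prediction toward a surrogate built from the forgotten sample's $k$-nearest neighbors among retained data. The snippet below is a minimal, hedged illustration of that redirection idea only; the function names, the mean-of-neighbors surrogate, and the squared-error loss are assumptions for exposition, not the paper's importance-sampled objective.

```python
import numpy as np

def knn_redirect_target(x_forget, retain_set, k=3):
    """Return the mean of the k retained samples nearest to the forgotten
    sample, used here as a surrogate denoising target (an assumption; the
    paper derives its target from an importance-sampled fine-tuning loss)."""
    dists = np.linalg.norm(retain_set - x_forget, axis=1)
    nearest = np.argsort(dists)[:k]
    return retain_set[nearest].mean(axis=0)

def redirect_loss(pred_x0, x_forget, retain_set, k=3):
    """Toy redirection loss: penalize the model's predicted clean sample
    for deviating from the k-NN surrogate, steering the denoising
    trajectory away from the forgotten sample."""
    target = knn_redirect_target(x_forget, retain_set, k)
    return float(np.mean((pred_x0 - target) ** 2))
```

In an actual fine-tuning loop this loss would be computed on the model's prediction at sampled noise levels and minimized by gradient descent; the 2-D NumPy vectors here stand in for images purely to keep the sketch self-contained.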

Originally published on March 31, 2026. Curated by AI News.
