[2510.17421] Diffusion Models as Dataset Distillation Priors
Computer Science > Machine Learning

arXiv:2510.17421 (cs)

[Submitted on 20 Oct 2025 (v1), last revised 3 Apr 2026 (this version, v2)]

Title: Diffusion Models as Dataset Distillation Priors
Authors: Duo Su, Huyu Wu, Huanran Chen, Yiming Shi, Yuzhu Wang, Xi Ye, Jun Zhu

Abstract: Dataset distillation aims to synthesize compact yet informative datasets from large ones. A central challenge in this field is achieving a trifecta of diversity, generalization, and representativeness in a single distilled dataset. Although recent generative dataset distillation methods adopt powerful diffusion models as their foundation, they overlook the representativeness prior inherent in diffusion models and consequently often require external constraints to improve data quality. To address this, we propose Diffusion As Priors (DAP), which formalizes representativeness by quantifying the similarity between synthetic and real data in feature space with a Mercer kernel. We then introduce this prior as guidance to steer the reverse diffusion process, enhancing the representativeness of distilled samples without any retraining. Extensive experiments on large-scale datasets, such as ImageNet-1K and its subsets, demonstrate that DAP outperforms state-of-the-art methods in generating high-fidelity datasets while achieving ...
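For intuition, here is a minimal sketch of how a kernel-similarity prior could steer a reverse diffusion step in the spirit of training-free guidance. It is an illustrative approximation, not the authors' implementation: the names `denoiser`, `feat_extractor`, `real_feats`, `sampler_step`, the RBF choice of Mercer kernel, and the `scale` weight are all assumptions.

```python
import torch


def rbf_kernel(a: torch.Tensor, b: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # A Mercer (RBF) kernel between two batches of feature vectors.
    d2 = torch.cdist(a, b).pow(2)
    return torch.exp(-d2 / (2.0 * sigma**2))


@torch.enable_grad()
def kernel_guidance(x_t, t, denoiser, feat_extractor, real_feats, scale=1.0):
    """Gradient of a kernel-similarity prior with respect to the noisy sample x_t.

    real_feats: precomputed features of real data (e.g., from the target class).
    """
    x_t = x_t.detach().requires_grad_(True)
    x0_hat = denoiser(x_t, t)                 # model's clean-sample estimate
    f = feat_extractor(x0_hat)                # features of the synthetic estimate
    sim = rbf_kernel(f, real_feats).mean()    # representativeness score
    (grad,) = torch.autograd.grad(sim, x_t)   # direction that raises similarity
    return scale * grad


# Usage (hypothetical sampler loop): nudge the standard reverse update with the prior,
# so no diffusion-model retraining is needed.
# x_prev = sampler_step(x_t, t) + kernel_guidance(x_t, t, denoiser, feat_extractor, real_feats)
```

The design mirrors classifier-style guidance: the base sampler step is left untouched and the prior enters only as an additive gradient term, which is why it requires no retraining of the diffusion model.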