[2603.25144] FD$^2$: A Dedicated Framework for Fine-Grained Dataset Distillation
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.25144 (cs)

[Submitted on 26 Mar 2026]

Title: FD$^2$: A Dedicated Framework for Fine-Grained Dataset Distillation

Authors: Hongxu Ma, Guang Li, Shijie Wang, Dongzhan Zhou, Baoli Sun, Takahiro Ogawa, Miki Haseyama, Zhihui Wang

Abstract: Dataset distillation (DD) compresses a large training set into a small synthetic set, reducing storage and training cost, and has shown strong results on general benchmarks. Decoupled DD further improves efficiency by splitting the pipeline into pretraining, sample distillation, and soft-label generation. However, existing decoupled methods largely rely on coarse class-label supervision and optimize samples within each class in a nearly identical manner. On fine-grained datasets, this often yields distilled samples that (i) retain large intra-class variation with subtle inter-class differences and (ii) become overly similar within the same class, limiting localized discriminative cues and hurting recognition. To solve the above-mentioned problems, we propose FD$^{2}$, a dedicated framework for Fine-grained Dataset Distillation. FD$^{2}$ localizes discriminative regions and constructs fine-grained representations for distillation. During pretraining, counterfactual attention learning aggregates discriminative represent...
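The abstract's counterfactual attention idea can be illustrated with a toy sketch: compare predictions made with a learned attention map against predictions made with a random ("counterfactual") attention map, and treat the difference as the attention's causal effect. This is a minimal NumPy illustration of that general pattern, not the paper's implementation; all names (`attend`, the shapes, the random classifier) are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def attend(features, attention):
    """Attention-weighted spatial pooling.
    features: (C, H, W), attention: (H, W) -> pooled vector (C,)."""
    w = attention / attention.sum()
    return (features * w).sum(axis=(1, 2))

C, H, W, num_classes = 8, 4, 4, 3
features = rng.normal(size=(C, H, W))          # toy feature map
classifier = rng.normal(size=(num_classes, C)) # toy linear head

learned_att = rng.random((H, W))  # stand-in for a learned attention map
random_att = rng.random((H, W))   # counterfactual: random attention

logits_fact = classifier @ attend(features, learned_att)
logits_cf = classifier @ attend(features, random_att)

# The "effect" of attention is the factual-minus-counterfactual logit gap;
# counterfactual attention training encourages this gap to favor the true
# class, pushing the learned map toward genuinely discriminative regions.
effect = logits_fact - logits_cf
```

In a real training loop the effect on the ground-truth class would enter the loss, so gradients sharpen the learned attention only where it outperforms chance.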