[2603.28092] InkDrop: Invisible Backdoor Attacks Against Dataset Condensation
Computer Science > Machine Learning
arXiv:2603.28092 (cs)
[Submitted on 30 Mar 2026]

Title: InkDrop: Invisible Backdoor Attacks Against Dataset Condensation
Authors: He Yang, Dongyi Lv, Song Ma, Wei Xi, Zhi Wang, Hanlin Gu, Yajie Wang

Abstract: Dataset Condensation (DC) is a data-efficient learning paradigm that synthesizes small yet informative datasets, enabling models to match the performance of full-data training. However, recent work exposes a critical vulnerability of DC to backdoor attacks, in which malicious patterns (e.g., triggers) are implanted into the condensed dataset, inducing targeted misclassification on specific inputs. Existing attacks prioritize attack effectiveness and model utility while overlooking the crucial dimension of stealthiness. To bridge this gap, we propose InkDrop, which enhances the imperceptibility of the malicious manipulation without degrading attack effectiveness or model utility. InkDrop exploits the inherent uncertainty near model decision boundaries, where minor input perturbations can induce semantic shifts, to construct a stealthy and effective backdoor attack. Specifically, InkDrop first selects candidate samples near the target decision boundary that exhibit latent semantic affinity to the target class. It then learns instance-dependent perturbations constrained by pe...
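To make the two steps named in the abstract concrete, below is a minimal PyTorch sketch of what they could look like. This is an illustrative assumption, not the paper's implementation: the margin heuristic for boundary proximity, the L-infinity budget eps standing in for the (truncated) perceptual constraint, and all function names and hyperparameters are hypothetical.

```python
# Hypothetical sketch of the two InkDrop steps described in the abstract.
# The margin heuristic, the eps budget, and all names are assumptions.
import torch
import torch.nn.functional as F

def select_boundary_candidates(model, images, labels, target_class, k=64):
    """Pick non-target-class samples whose logit margin over the target
    class is smallest, i.e. samples lying near the target decision boundary."""
    model.eval()
    with torch.no_grad():
        logits = model(images)
    own = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    margin = own - logits[:, target_class]   # small margin => near boundary
    mask = labels != target_class            # exclude target-class samples
    margin = torch.where(mask, margin, torch.full_like(margin, float("inf")))
    return torch.topk(-margin, k).indices    # indices of the k smallest margins

def learn_instance_perturbations(model, images, target_class,
                                 eps=4 / 255, steps=100, lr=0.01):
    """Learn one perturbation per sample that pushes it toward the target
    class, with an L-infinity clamp standing in for a perceptual budget."""
    delta = torch.zeros_like(images, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    tgt = torch.full((images.size(0),), target_class,
                     dtype=torch.long, device=images.device)
    for _ in range(steps):
        opt.zero_grad()
        out = model((images + delta).clamp(0, 1))
        loss = F.cross_entropy(out, tgt)      # pull toward the target class
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)           # enforce imperceptibility budget
    return delta.detach()
```

A plausible usage pattern, under the same assumptions: run select_boundary_candidates on the training pool, then learn_instance_perturbations on the selected subset before those samples enter the condensation pipeline. The per-sample (instance-dependent) delta is what distinguishes this from a fixed patch trigger.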