[2603.28824] SNEAKDOOR: Stealthy Backdoor Attacks against Distribution Matching-based Dataset Condensation




Computer Science > Cryptography and Security
arXiv:2603.28824 (cs)
[Submitted on 29 Mar 2026]

Title: SNEAKDOOR: Stealthy Backdoor Attacks against Distribution Matching-based Dataset Condensation

Authors: He Yang, Dongyi Lv, Song Ma, Wei Xi, Jizhong Zhao

Abstract: Dataset condensation aims to synthesize compact yet informative datasets that retain the training efficacy of full-scale data, offering substantial gains in efficiency. Recent studies reveal that the condensation process can be vulnerable to backdoor attacks, in which malicious triggers are injected into the condensed dataset to manipulate model behavior during inference. While prior approaches have made progress in balancing attack success rate and clean test accuracy, they often fall short in preserving stealthiness, especially in concealing the visual artifacts of condensed data or the perturbations introduced during inference. To address this challenge, we introduce Sneakdoor, which enhances stealthiness without compromising attack effectiveness. Sneakdoor exploits the inherent vulnerability of class decision boundaries and incorporates a generative module that constructs input-aware triggers aligned with local feature geometry, thereby minimizing detectability. This joint design enables the attack to remain imperceptible to both ...
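For readers unfamiliar with the setting the paper attacks, the sketch below illustrates the core idea of distribution matching-based dataset condensation in a deliberately minimal form: a few synthetic points are optimized so that their feature statistics match those of a much larger real dataset. This is an illustrative toy, not the paper's method or any specific library's API; the "feature" here is the identity map and only first moments (means) are matched, whereas real condensation methods match richer embedding statistics across networks and classes.

```python
import random

# Toy sketch of distribution matching-based dataset condensation
# (illustrative assumption, NOT the paper's implementation).
# Idea: learn a small synthetic set whose feature statistics match
# those of a larger real dataset. Here the feature map is the
# identity and we match means via plain gradient descent.

random.seed(0)

def mean(points):
    """Coordinate-wise mean of a list of equal-length vectors."""
    d = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(d)]

# "Real" dataset: 100 two-dimensional points clustered near (1.0, -2.0).
real = [[1.0 + random.gauss(0, 0.1), -2.0 + random.gauss(0, 0.1)]
        for _ in range(100)]

# Condensed dataset: only 2 synthetic points, randomly initialised.
syn = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(2)]

lr = 0.5
for _ in range(200):
    target = mean(real)
    current = mean(syn)
    # Gradient of ||mean(syn) - mean(real)||^2 w.r.t. each synthetic
    # coordinate is 2 * (mean(syn) - mean(real)) / len(syn).
    for p in syn:
        for i in range(len(p)):
            p[i] -= lr * 2 * (current[i] - target[i]) / len(syn)

# After optimization the synthetic mean tracks the real mean closely,
# so the 2 synthetic points act as a "condensed" stand-in for 100.
print(mean(syn), mean(real))
```

A backdoor attack in this setting, as the abstract describes, would perturb the synthetic points during this optimization so that models trained on them respond to an attacker-chosen trigger, while the matching objective keeps the condensed data looking benign.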

Originally published on April 01, 2026. Curated by AI News.

