[2605.05224] Channel-Level Semantic Perturbations: Unlearnable Examples for Diverse Training Paradigms
Computer Science > Machine Learning
arXiv:2605.05224 (cs)
[Submitted on 18 Apr 2026]

Title: Channel-Level Semantic Perturbations: Unlearnable Examples for Diverse Training Paradigms
Authors: Bo Wang, Jia Ni, Mengnan Zhao, Zhan Qin, Kui Ren

Abstract: The unauthorized use of personal data in model training has emerged as a growing privacy threat. Unlearnable examples (UEs) address this issue by embedding imperceptible perturbations into benign examples to obstruct feature learning. However, existing studies mainly evaluate UEs under from-scratch training settings, leaving their behavior under the widely adopted pretraining-finetuning (PF) paradigm largely unexplored. In this work, we provide the first systematic investigation of unlearnable examples across diverse training paradigms. Our analysis reveals that loading and freezing pretrained weights significantly weakens the effectiveness of existing UE methods. We further explain these findings through semantic filtering: UEs tend to induce models to overfit non-semantic noise, thereby weakening their semantic extraction capabilities; under the PF paradigm, however, frozen shallow layers preserve data semantics, effectively filtering out distracting information such as unlearnable noise. Guided by these insights, we propose a hierarchical deception strategy...
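The abstract's core mechanism is that UEs embed an imperceptible perturbation into each benign example. A minimal sketch of that constraint, assuming the standard L-infinity imperceptibility budget common in this literature (the budget value 8/255 and the `perturb` helper are illustrative, not from the paper; the actual perturbation-crafting objective is not shown here):

```python
import numpy as np

def perturb(images: np.ndarray, noise: np.ndarray, eps: float = 8 / 255) -> np.ndarray:
    """Add noise to images under ||delta||_inf <= eps, keeping pixels in [0, 1]."""
    delta = np.clip(noise, -eps, eps)          # enforce the imperceptibility budget
    return np.clip(images + delta, 0.0, 1.0)   # stay in the valid pixel range

rng = np.random.default_rng(0)
x = rng.random((4, 3, 32, 32))                 # toy batch of CIFAR-sized images
noise = rng.normal(scale=0.1, size=x.shape)    # stand-in for a learned UE noise
x_ue = perturb(x, noise)                       # "unlearnable" versions of x
```

Because both the noise and the final sum are clipped, the perturbed batch never deviates from the clean batch by more than the budget, which is what keeps the poisoned data visually indistinguishable from the original.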