[2605.05224] Channel-Level Semantic Perturbations: Unlearnable Examples for Diverse Training Paradigms


Computer Science > Machine Learning

arXiv:2605.05224 (cs) [Submitted on 18 Apr 2026]

Title: Channel-Level Semantic Perturbations: Unlearnable Examples for Diverse Training Paradigms

Authors: Bo Wang, Jia Ni, Mengnan Zhao, Zhan Qin, Kui Ren

Abstract: The unauthorized use of personal data in model training has emerged as a growing privacy threat. Unlearnable examples (UEs) address this issue by embedding imperceptible perturbations into benign examples to obstruct feature learning. However, existing studies mainly evaluate UEs under from-scratch training settings, leaving their behavior under the widely adopted pretraining-finetuning (PF) paradigm largely unexplored. In this work, we provide the first systematic investigation of unlearnable examples across diverse training paradigms. Our analysis reveals that loading and freezing pretrained weights significantly weakens the effectiveness of existing UE methods. We further explain these findings through semantic filtering: UEs tend to induce models to overfit non-semantic noise, weakening their semantic extraction capabilities, but under the PF paradigm, frozen shallow layers preserve data semantics and effectively filter out distracting information such as unlearnable noise. Guided by these insights, we propose a hierarchical deception strat...
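The unlearnable-example idea the abstract describes is commonly realized as a bi-level "min-min" optimization: a bounded perturbation is optimized to *minimize* training loss, so the poisoned data looks already learned and the model latches onto the noise instead of real semantics. The toy NumPy sketch below illustrates that loop on a logistic-regression model; it is a minimal illustration of the general technique, not the paper's channel-level method, and all names (`EPS`, `loss_and_grads`, the toy data) are illustrative assumptions.

```python
import numpy as np

# Toy sketch of error-minimizing ("min-min") unlearnable perturbations.
# Illustrative only; not the paper's channel-level semantic method.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grads(w, X, y):
    """Logistic loss; returns loss, grad w.r.t. weights, grad w.r.t. inputs."""
    p = sigmoid(X @ w)
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    err = (p - y) / len(y)
    return loss, X.T @ err, np.outer(err, w)

# Synthetic binary classification data
X = rng.normal(size=(64, 5))
y = (X @ rng.normal(size=5) > 0).astype(float)

EPS = 0.1                      # L-inf budget: perturbation stays imperceptible
delta = np.zeros_like(X)       # the unlearnable perturbation
w = np.zeros(5)

for _ in range(200):
    # Inner step: fit the model a little on the perturbed data.
    _, gw, _ = loss_and_grads(w, X + delta, y)
    w -= 0.5 * gw
    # Outer step: update delta to *minimize* the same loss (min-min),
    # so the perturbed examples carry easy shortcuts instead of semantics.
    _, _, gX = loss_and_grads(w, X + delta, y)
    delta = np.clip(delta - 0.5 * gX, -EPS, EPS)

final_loss, _, _ = loss_and_grads(w, X + delta, y)
print(round(final_loss, 4))    # far below the chance-level loss of ~0.693
```

The abstract's key observation maps onto this picture: freezing pretrained shallow layers fixes the features the model extracts, so the bounded `delta` can no longer steer feature learning toward the shortcut noise.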

Originally published on May 08, 2026. Curated by AI News.
