[2506.06027] Sample-Specific Noise Injection For Diffusion-Based Adversarial Purification

arXiv - Machine Learning 4 min read Article

Summary

This paper introduces Sample-specific Score-aware Noise Injection (SSNI), a novel framework for diffusion-based adversarial purification that optimizes noise levels for individual samples, enhancing accuracy and robustness in image classification tasks.

Why It Matters

As adversarial attacks on machine learning models become more prevalent, improving the robustness of these models is crucial. This research highlights the importance of tailoring noise levels to specific samples, potentially leading to more effective defenses against adversarial perturbations in computer vision applications.

Key Takeaways

  • SSNI framework adjusts noise levels based on individual sample characteristics.
  • Empirical results show significant improvements in model accuracy and robustness.
  • The approach emphasizes the need for sample-specific strategies in adversarial purification.
  • Utilizes a pre-trained score network to evaluate data point deviations.
  • Demonstrates effectiveness on benchmark datasets like CIFAR-10 and ImageNet-1K.

Computer Science > Computer Vision and Pattern Recognition — arXiv:2506.06027 (cs)
[Submitted on 6 Jun 2025 (v1), last revised 12 Feb 2026 (this version, v2)]

Title: Sample-Specific Noise Injection For Diffusion-Based Adversarial Purification
Authors: Yuhao Sun, Jiacheng Zhang, Zesheng Ye, Chaowei Xiao, Feng Liu

Abstract: Diffusion-based purification (DBP) methods aim to remove adversarial noise from an input sample by first injecting Gaussian noise through a forward diffusion process and then recovering the clean example through a reverse generative process. How much Gaussian noise is injected into the input sample is key to the success of DBP methods; existing methods control it with a constant noise level $t^*$ shared by all samples. In this paper, we discover that the optimal $t^*$ can indeed differ from sample to sample: intuitively, the cleaner a sample is, the less noise it should receive, and vice versa. Motivated by this finding, we propose a new framework, called Sample-specific Score-aware Noise Injection (SSNI). Specifically, SSNI uses a pre-trained score network to estimate how much a data point deviates from the clean data distribution (i.e., score norms). Then, based on the magnitude of score norms, SSNI applies a reweighting function to adaptively adjust $t^*$...
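The core idea in the abstract can be sketched in a few lines: estimate each sample's score norm with a score network, then map larger norms (samples further from the clean distribution) to larger noise levels. This is only an illustrative sketch, not the authors' implementation; the function names, the rank-based reweighting, and the `t_min`/`t_max` range are all assumptions, and a toy score function stands in for a pre-trained score network.

```python
import numpy as np

def score_norm(score_fn, x):
    # Deviation proxy: the norm of the estimated score at x.
    # Larger norms suggest x lies further from the clean data distribution.
    return float(np.linalg.norm(score_fn(x)))

def sample_specific_t(x, score_fn, norms_ref, t_min=50, t_max=150):
    """Map a sample's score norm to a sample-specific noise level t.

    Ranks the sample's score norm within a reference batch of norms and
    interpolates linearly on [t_min, t_max], so cleaner samples (smaller
    norms) receive less injected noise. The rank-based mapping here is a
    hypothetical choice of reweighting function, not the paper's.
    """
    n = score_norm(score_fn, x)
    rank = float(np.mean(np.asarray(norms_ref) <= n))
    return round(t_min + rank * (t_max - t_min))
```

As a quick sanity check with a toy score function (the score of a standard Gaussian, `-x`): a low-norm "clean" sample maps to `t_min`, while a high-norm "adversarial" one maps toward `t_max`.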

Related Articles

Generative AI

Inside OpenAI's decision to abandon Sora AI video app

Reddit - Artificial Intelligence · 1 min ·
Accelerating science with AI and simulations
Machine Learning
MIT Professor Rafael Gómez-Bombarelli discusses the transformative potential of AI in scientific research, emphasizing its role in materi...

AI News - General · 10 min ·
[2603.12057] Coarse-Guided Visual Generation via Weighted h-Transform Sampling
Machine Learning
Abstract page for arXiv paper 2603.12057: Coarse-Guided Visual Generation via Weighted h-Transform Sampling

arXiv - AI · 4 min ·
[2603.07455] Image Generation Models: A Technical History
Machine Learning
Abstract page for arXiv paper 2603.07455: Image Generation Models: A Technical History

arXiv - AI · 3 min ·