[2504.18594] RaPA: Enhancing Transferable Targeted Attacks via Random Parameter Pruning


Summary

The paper presents RaPA, a method that enhances transferable targeted adversarial attacks by randomly pruning surrogate-model parameters during attack optimization, improving attack success rates across different model architectures.

Why It Matters

As machine learning models become more prevalent, understanding and improving adversarial attacks is crucial for AI safety and security. RaPA addresses limitations in existing methods, potentially leading to more robust defenses against targeted attacks.

Key Takeaways

  • RaPA introduces random parameter pruning to enhance transferability of targeted attacks.
  • The method significantly improves attack success rates, especially when transferring between different model architectures.
  • RaPA is training-free and can be integrated into existing attack frameworks, making it accessible for further research.

Computer Science > Machine Learning
arXiv:2504.18594 (cs)
[Submitted on 24 Apr 2025 (v1), last revised 26 Feb 2026 (this version, v2)]

Title: RaPA: Enhancing Transferable Targeted Attacks via Random Parameter Pruning
Authors: Tongrui Su, Qingbin Li, Shengyu Zhu, Wei Chen, Xueqi Cheng

Abstract: Compared to untargeted attacks, targeted transfer-based attacks still suffer from much lower Attack Success Rates (ASRs), although significant improvements have been achieved by various methods, such as diversifying inputs, stabilizing gradients, and re-training surrogate models. In this paper, we find that adversarial examples generated by existing methods rely heavily on a small subset of surrogate model parameters, which in turn limits their transferability to unseen target models. Inspired by this, we propose the Random Parameter Pruning Attack (RaPA), which introduces parameter-level randomization during the attack process. At each optimization step, RaPA randomly prunes model parameters to generate diverse yet semantically consistent surrogate models. We show this parameter-level randomization is equivalent to adding an importance-equalization regularizer, thereby alleviating the over-reliance issue. Extensive experiments across both CNN and Transformer architectu...
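The paper's code is not reproduced here, but the core idea in the abstract — randomly pruning surrogate parameters at each attack iteration before computing the targeted-loss gradient — can be sketched in a few lines. The sketch below uses a hypothetical tiny linear-softmax surrogate; all names and hyperparameters (`rapa_step`, `prune_ratio`, the step size) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surrogate: a linear classifier with weights W and bias b.
D, C = 16, 4                      # input dimension, number of classes
W = rng.normal(size=(C, D))
b = np.zeros(C)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def rapa_step(x_adv, target, prune_ratio=0.3, step=0.05):
    """One attack iteration with random parameter pruning: zero out a random
    subset of surrogate weights, then take a signed gradient-descent step on
    the targeted cross-entropy loss w.r.t. the input."""
    mask = rng.random(W.shape) >= prune_ratio   # keep ~70% of parameters
    Wp = W * mask                               # randomly pruned surrogate
    p = softmax(Wp @ x_adv + b)
    # For a linear+softmax model, d(-log p[target])/dx = Wp^T (p - e_target).
    grad = Wp.T @ (p - np.eye(C)[target])
    return x_adv - step * np.sign(grad)         # descend toward the target class

x = rng.normal(size=D)
x_adv = x.copy()
for _ in range(50):
    x_adv = rapa_step(x_adv, target=2)

# Evaluate on the full (unpruned) model, standing in for an unseen target model.
print(softmax(W @ x_adv + b)[2] > softmax(W @ x + b)[2])
```

Because a fresh pruning mask is drawn at every step, no single weight can dominate the accumulated gradient signal, which is the "importance-equalization" effect the abstract describes.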

