[2504.18594] RaPA: Enhancing Transferable Targeted Attacks via Random Parameter Pruning
Summary
The paper presents RaPA, a method that improves the transferability of targeted adversarial attacks by randomly pruning surrogate-model parameters during attack optimization, raising attack success rates across different model architectures.
Why It Matters
As machine learning models become more prevalent, understanding and improving adversarial attacks is crucial for AI safety and security. RaPA addresses limitations in existing methods, potentially leading to more robust defenses against targeted attacks.
Key Takeaways
- RaPA introduces random parameter pruning to enhance transferability of targeted attacks.
- The method significantly improves attack success rates, especially when transferring across model architectures (e.g., from CNNs to Transformers).
- RaPA is training-free and can be integrated into existing attack frameworks, making it accessible for further research.
Computer Science > Machine Learning
arXiv:2504.18594 (cs)
[Submitted on 24 Apr 2025 (v1), last revised 26 Feb 2026 (this version, v2)]
Title: RaPA: Enhancing Transferable Targeted Attacks via Random Parameter Pruning
Authors: Tongrui Su, Qingbin Li, Shengyu Zhu, Wei Chen, Xueqi Cheng
Abstract: Compared to untargeted attacks, targeted transfer-based attacks still suffer from much lower Attack Success Rates (ASRs), although significant improvements have been achieved by a variety of methods, such as diversifying inputs, stabilizing gradients, and re-training surrogate models. In this paper, we find that adversarial examples generated by existing methods rely heavily on a small subset of surrogate model parameters, which in turn limits their transferability to unseen target models. Inspired by this, we propose the Random Parameter Pruning Attack (RaPA), which introduces parameter-level randomization during the attack process. At each optimization step, RaPA randomly prunes model parameters to generate diverse yet semantically consistent surrogate models. We show this parameter-level randomization is equivalent to adding an importance-equalization regularizer, thereby alleviating the over-reliance issue. Extensive experiments across both CNN and Transformer architectu...
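The abstract's core loop, random pruning of surrogate parameters at each optimization step before computing the attack gradient, can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: a toy linear surrogate stands in for a deep network, and the names `random_prune` and `attack_step` are ours. The point is only the structure: a fresh random mask per step yields a different pruned surrogate each iteration.

```python
import random

def random_prune(params, prune_rate, rng):
    """Return a copy of the surrogate parameters with a random fraction
    zeroed out -- one fresh mask per optimization step, as RaPA describes."""
    return [0.0 if rng.random() < prune_rate else w for w in params]

def attack_step(x, params, grad_fn, step_size, prune_rate, rng):
    """One attack iteration: prune, then take a gradient step on the input."""
    pruned = random_prune(params, prune_rate, rng)
    g = grad_fn(x, pruned)
    # Descend the targeted loss computed on the randomly pruned surrogate.
    return [xi - step_size * gi for xi, gi in zip(x, g)]

def make_grad_fn(target):
    """Toy surrogate loss: (w . x - target)^2; gradient w.r.t. x is
    2 * (w . x - target) * w."""
    def grad_fn(x, w):
        err = sum(wi * xi for wi, xi in zip(w, x)) - target
        return [2.0 * err * wi for wi in w]
    return grad_fn

rng = random.Random(0)
params = [0.5, -1.2, 0.8, 0.3]      # fixed surrogate weights
x = [1.0, 1.0, 1.0, 1.0]            # "adversarial" input being optimized
grad_fn = make_grad_fn(target=0.0)
for _ in range(50):
    x = attack_step(x, params, grad_fn, step_size=0.05,
                    prune_rate=0.3, rng=rng)
```

Because each step sees a different pruned surrogate, no single parameter subset can dominate the gradient signal, which is the mechanism the abstract identifies as an implicit importance-equalization regularizer.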