[2602.23117] Delving into Adversarial Transferability on Image Classification: Review, Benchmark, and Evaluation
Summary
This article reviews adversarial transferability in image classification, proposing a standardized framework for evaluating transfer-based attacks and categorizing existing approaches.
Why It Matters
Adversarial transferability poses significant security risks in AI applications. By establishing a standardized evaluation framework, this research aims to improve the reliability of assessments in the field, which is crucial for developing robust AI systems and enhancing security measures against adversarial attacks.
Key Takeaways
- Adversarial transferability allows attacks on victim models without direct access to them.
- The article identifies a lack of standardized evaluation criteria in existing research.
- A new framework for benchmarking transfer-based attacks is proposed.
- Common strategies to enhance adversarial transferability are discussed.
- The review highlights issues leading to unfair comparisons in evaluations.
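To make the first takeaway concrete, here is a minimal sketch of a transfer-based attack using two toy linear classifiers (hypothetical stand-ins, not the paper's benchmark or models): an adversarial example is crafted with the Fast Gradient Sign Method (FGSM) on a surrogate model, then tested against a separately perturbed victim model the attacker never queried.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(w, x):
    """Linear binary classifier: returns class 0 or 1."""
    return int(w @ x > 0)

def fgsm(w, x, y, eps):
    """FGSM for a linear model with logistic loss.
    Gradient of the loss w.r.t. x is (sigmoid(w @ x) - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(w @ x)))
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Surrogate and victim share a similar decision boundary, as two models
# trained on the same task often do.
surrogate = np.array([1.0, 2.0, -1.0])
victim = surrogate + 0.1 * rng.normal(size=3)

x = np.array([0.5, 0.5, 0.1])  # clean input, true label 1
y = 1

# Craft the adversarial example on the surrogate only.
x_adv = fgsm(surrogate, x, y, eps=1.0)

# The perturbation transfers: the victim is fooled without ever being queried.
print(predict(victim, x))      # clean input classified correctly
print(predict(victim, x_adv))  # adversarial input misclassified
```

The key point is that `fgsm` only ever touches `surrogate`; because the victim's decision boundary is correlated with the surrogate's, the same perturbation pushes the input across both boundaries. Surveyed transfer-based attacks can be read as increasingly sophisticated ways of making this correlation exploitable.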
Computer Science > Computer Vision and Pattern Recognition
arXiv:2602.23117 (cs) [Submitted on 26 Feb 2026]
Title: Delving into Adversarial Transferability on Image Classification: Review, Benchmark, and Evaluation
Authors: Xiaosen Wang, Zhijin Ge, Bohan Liu, Zheng Fang, Fengfan Zhou, Ruixuan Zhang, Shaokang Wang, Yuyang Luo
Abstract: Adversarial transferability refers to the capacity of adversarial examples generated on a surrogate model to deceive alternate, unseen victim models. This property eliminates the need for direct access to the victim model during an attack, raising considerable security concerns in practical applications and attracting substantial research attention in recent years. In this work, we identify the lack of a standardized framework and criteria for evaluating transfer-based attacks, which leads to potentially biased assessments of existing approaches. To rectify this gap, we conducted an exhaustive review of hundreds of related works, organizing transfer-based attacks into six distinct categories. We then propose a comprehensive framework designed to serve as a benchmark for evaluating these attacks. In addition, we delineate common strategies that enhance adversarial transferability and highlight prevalent issues that could lead to unfair comparisons.