[2602.19096] The Power of Decaying Steps: Enhancing Attack Stability and Transferability for Sign-based Optimizers


Summary

This paper examines the limitations of sign-based optimizers in generating adversarial examples and proposes enforcing Monotonically Decreasing Coordinate-wise Step-sizes (MDCS) to enhance attack stability and transferability.

Why It Matters

The findings address critical issues in adversarial machine learning, particularly the instability and non-convergence of existing sign-based optimizers. By making attacks more stable and more transferable, this research supports more rigorous evaluation of machine learning models' robustness to adversarial examples, which is crucial for AI safety.

Key Takeaways

  • Sign-based optimizers face challenges with non-convergence and instability.
  • The proposed MDCS method enhances attack stability and transferability (see the step-size sketch after this list).
  • Theoretical guarantees show optimal convergence rates for the new algorithms.
  • Extensive experiments validate the effectiveness of the proposed approach.
  • Improving adversarial robustness is essential for AI applications.
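The step-size pathology behind the first takeaway is easiest to see by writing a sign-based update coordinate by coordinate. The identity below is elementary (sign(g) = g/|g| for g ≠ 0); that the paper's reformulation takes exactly this form is an assumption inferred from the abstract, with g_t the loss gradient and \alpha_t the scalar step size:

    x_{t+1,i} = x_{t,i} + \alpha_t \,\mathrm{sign}(g_{t,i})
              = x_{t,i} + \eta_{t,i} \, g_{t,i},
    \qquad \eta_{t,i} = \frac{\alpha_t}{|g_{t,i}|}.

Even with a constant or decaying scalar \alpha_t, the effective per-coordinate step \eta_{t,i} grows whenever |g_{t,i}| shrinks, so the coordinate-wise steps need not decay over iterations; MDCS enforces that they decrease monotonically.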

Computer Science > Machine Learning
arXiv:2602.19096 (cs) · Submitted on 22 Feb 2026

Title: The Power of Decaying Steps: Enhancing Attack Stability and Transferability for Sign-based Optimizers
Authors: Wei Tao, Yang Dai, Jincai Huang, Qing Tao

Abstract: Crafting adversarial examples can be formulated as an optimization problem. While sign-based optimizers such as I-FGSM and MI-FGSM have become the de facto standard for the induced optimization problems, several problems in their theoretical grounding and practical reliability remain unsolved, especially non-convergence and instability, which inevitably affect their transferability. Contrary to expectation, we observe that the attack success rate may degrade sharply as more iterations are conducted. In this paper, we address these issues from an optimization perspective. By reformulating the sign-based optimizer as a specific coordinate-wise gradient descent, we argue that one cause of non-convergence and instability is their non-decaying step-size scheduling. Based upon this viewpoint, we propose a series of new attack algorithms that enforce Monotonically Decreasing Coordinate-wise Step-sizes (MDCS) within sign-based optimizers. We further provide theoretical guarantees proving that MDCS-MI attains an optimal convergence rate...
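To make the proposal concrete, here is a minimal NumPy sketch of an MI-FGSM-style ascent loop with monotonically decreasing coordinate-wise step sizes enforced by clipping. The min-clipping rule, the 1/sqrt(t) scalar schedule, and the name mdcs_mi_attack are illustrative assumptions, not the paper's MDCS-MI algorithm; the epsilon-ball projection and input clipping that real attacks need are omitted for brevity.

    import numpy as np

    def mdcs_mi_attack(x0, grad_fn, alpha0=0.5, mu=1.0, steps=100):
        """Sketch: MI-FGSM-style ascent with Monotonically Decreasing
        Coordinate-wise Step-sizes (MDCS). The clipping rule below is an
        assumed illustration of MDCS, not the paper's exact update."""
        x = x0.astype(float).copy()
        g_mom = np.zeros_like(x)               # MI-FGSM momentum buffer
        eta = np.full_like(x, np.inf)          # per-coordinate step sizes
        for t in range(1, steps + 1):
            g = grad_fn(x)
            # Standard MI-FGSM: accumulate L1-normalized gradients
            # (mu = 1.0 is the common default decay factor).
            g_mom = mu * g_mom + g / (np.abs(g).sum() + 1e-12)
            alpha_t = alpha0 / np.sqrt(t)      # decaying scalar schedule (assumed)
            # MDCS idea: never let a coordinate's effective step size grow.
            eta = np.minimum(eta, alpha_t / (np.abs(g_mom) + 1e-12))
            # eta * g_mom matches alpha_t * sign(g_mom) per coordinate,
            # except where the monotone cap binds and the step is smaller.
            x = x + eta * g_mom
        return x

    # Toy check: ascend the concave loss L(x) = -||x - target||^2;
    # the iterate should climb toward `target` with ever-smaller steps.
    target = np.array([3.0, -2.0])
    x_adv = mdcs_mi_attack(np.zeros(2), lambda x: -2.0 * (x - target))
    print(x_adv)

Because eta is clipped with np.minimum at every iteration, each coordinate's step size is non-increasing by construction, which is the stability property the abstract attributes to MDCS.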

