[2602.17155] Powering Up Zeroth-Order Training via Subspace Gradient Orthogonalization
Summary
The paper introduces ZO-Muon, a zeroth-order optimization method that improves convergence speed and accuracy when training large-scale models via subspace gradient orthogonalization.
Why It Matters
This research addresses a core limitation of zeroth-order optimization — the tension between accuracy and query efficiency — which matters for fine-tuning large models without backpropagation. By improving query efficiency and accuracy, it has significant implications for machine learning, particularly in resource-constrained environments.
Key Takeaways
- ZO optimization offers a gradient-free alternative to first-order methods, enhancing memory efficiency.
- The ZO-Muon method significantly reduces the number of queries needed for effective model fine-tuning.
- Improvements in accuracy and efficiency were demonstrated on large language models and vision transformers.
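The gradient-free estimation the takeaways refer to can be illustrated with a minimal two-point finite-difference (SPSA-style) sketch. This is a generic ZO estimator, not the paper's exact algorithm; the function `zo_gradient` and its parameters are illustrative.

```python
import numpy as np

def zo_gradient(f, theta, eps=1e-3, n_queries=8, rng=None):
    """Two-point zeroth-order gradient estimate.

    Averages finite-difference estimates along random Gaussian
    directions; each direction costs two function evaluations and
    requires no backpropagation.
    """
    rng = np.random.default_rng(rng)
    grad = np.zeros_like(theta)
    for _ in range(n_queries):
        u = rng.standard_normal(theta.shape)
        # Directional derivative estimate: (f(t + eps*u) - f(t - eps*u)) / (2*eps)
        d = (f(theta + eps * u) - f(theta - eps * u)) / (2 * eps)
        grad += d * u
    return grad / n_queries

# Usage: estimate the gradient of f(theta) = ||theta||^2, whose true
# gradient is 2*theta; the ZO estimate should align with it.
f = lambda t: float(np.sum(t ** 2))
theta = np.array([1.0, -2.0, 3.0])
g = zo_gradient(f, theta, n_queries=64, rng=0)
```

More queries reduce the variance of the estimate, which is exactly the accuracy-versus-query-count tension the paper targets.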
Computer Science > Machine Learning — arXiv:2602.17155 (cs)
[Submitted on 19 Feb 2026]
Authors: Yicheng Lang, Changsheng Wang, Yihua Zhang, Mingyi Hong, Zheng Zhang, Wotao Yin, Sijia Liu
Abstract: Zeroth-order (ZO) optimization provides a gradient-free alternative to first-order (FO) methods by estimating gradients via finite differences of function evaluations, and has recently emerged as a memory-efficient paradigm for fine-tuning large-scale models by avoiding backpropagation. However, ZO optimization faces a fundamental tension between accuracy and query efficiency. In this work, we show that ZO optimization can be substantially improved by unifying two complementary principles: (i) a projection-based subspace view that reduces gradient estimation variance by exploiting the intrinsic low-rank structure of model updates, and (ii) Muon-style spectral optimization that applies gradient orthogonalization to extract informative spectral structure from noisy ZO gradients. These findings form a unified framework of subspace gradient orthogonalization, which we instantiate in a new method, ZO-Muon, admitting a natural interpretation as a low-rank Muon optimizer in the ZO setting. Extensive experiments on large language models (LLMs) and...
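The two ingredients in the abstract — a low-rank subspace projection and Muon-style gradient orthogonalization — can be sketched as follows. This is an assumption-laden illustration, not the paper's exact recipe: the random projection, the rank, and the cubic Newton–Schulz iteration (Muon itself uses a tuned quintic variant) are all stand-ins chosen for clarity.

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=15):
    """Approximate U @ V.T from G = U S V.T without an explicit SVD.

    Cubic Newton-Schulz iteration: X <- 1.5*X - 0.5*X X^T X drives
    all singular values toward 1, i.e. orthogonalizes the gradient.
    Dividing by the Frobenius norm (an upper bound on the spectral
    norm) keeps the iteration in its convergence region.
    """
    X = G / (np.linalg.norm(G) + 1e-12)
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X

def subspace_orthogonalized_step(grad2d, rank=4, lr=0.1, rng=None):
    """One illustrative update combining both principles: project the
    noisy ZO gradient onto a random rank-r subspace (variance
    reduction), orthogonalize it there (spectral cleanup), lift back.
    """
    rng = np.random.default_rng(rng)
    m, n = grad2d.shape
    P = rng.standard_normal((n, rank)) / np.sqrt(rank)  # random subspace basis
    G_sub = grad2d @ P                       # m x r subspace gradient
    O_sub = newton_schulz_orthogonalize(G_sub)
    return -lr * (O_sub @ P.T)               # lifted m x n parameter update

# Usage on a noisy synthetic gradient matrix.
rng = np.random.default_rng(0)
noisy_grad = rng.standard_normal((8, 16))
update = subspace_orthogonalized_step(noisy_grad, rank=4, rng=1)
```

The orthogonalization step is what gives the Muon-style flavor: instead of scaling the update by noisy gradient magnitudes, all retained spectral directions are weighted equally.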