[2505.24275] GradPower: Powering Gradients for Faster Language Model Pre-Training
Computer Science > Machine Learning

arXiv:2505.24275 (cs)

[Submitted on 30 May 2025 (v1), last revised 2 Apr 2026 (this version, v2)]

Title: GradPower: Powering Gradients for Faster Language Model Pre-Training

Authors: Jinbo Wang, Mingze Wang, Jiaqi Zhang, Wei Wang, Peng Pei, Xunliang Cai, Weinan E, Lei Wu

Abstract: We propose GradPower, a lightweight gradient-transformation technique for accelerating language model pre-training. Given a gradient vector $g=(g_i)_i$, GradPower first applies the elementwise sign-power transformation $\varphi_p(g)=({\rm sign}(g_i)|g_i|^p)_{i}$ for a fixed $p>0$, and then feeds the transformed gradient into a base optimizer. Notably, GradPower requires only a single-line code change and no modifications to the base optimizer's internal logic, including its hyperparameters. When applied to Adam (termed AdamPower), GradPower consistently achieves lower terminal loss across diverse architectures (LLaMA, Qwen2MoE), parameter scales (66M to 2B), datasets (C4, OpenWebText), and learning-rate schedules (cosine, warmup-stable-decay). The most pronounced gains are observed when training modern mixture-of-experts models with warmup-stable-decay schedules. GradPower also integrates seamlessly with other state-of-the-art optimizers, such as Muon, yielding further improvements. Finally, we provide...
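The abstract describes GradPower as a one-line gradient transform applied between the backward pass and the unchanged base optimizer. Below is a minimal PyTorch sketch of that idea; the function name `gradpower_` and the exponent value `p=1.2` are illustrative assumptions, not taken from the paper.

```python
import torch

def gradpower_(params, p=1.2):
    """Apply the elementwise sign-power transform g -> sign(g) * |g|**p in place.

    The exponent p is a fixed hyperparameter (> 0); 1.2 here is only a placeholder.
    """
    with torch.no_grad():
        for param in params:
            if param.grad is not None:
                g = param.grad
                param.grad = torch.sign(g) * g.abs().pow(p)

# Usage sketch: the base optimizer (Adam) is untouched; only one line is added
# between loss.backward() and optimizer.step().
model = torch.nn.Linear(16, 16)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 16)
loss = model(x).pow(2).mean()
loss.backward()
gradpower_(model.parameters(), p=1.2)  # the single added line ("AdamPower")
optimizer.step()
optimizer.zero_grad()
```

The same wrapper pattern would apply to other base optimizers (e.g., Muon, as mentioned in the abstract), since the transform only rewrites the gradients before the optimizer reads them.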