[2603.01501] GAC: Stabilizing Asynchronous RL Training for LLMs via Gradient Alignment Control
Computer Science > Machine Learning

arXiv:2603.01501 (cs) [Submitted on 2 Mar 2026]

Title: GAC: Stabilizing Asynchronous RL Training for LLMs via Gradient Alignment Control
Authors: Haofeng Xu, Junwei Su, Yukun Tian, Lansong Diao, Zhengping Qian, Chuan Wu

Abstract: Asynchronous execution is essential for scaling reinforcement learning (RL) to modern large-model workloads, including large language models and AI agents, but it can fundamentally alter RL optimization behavior. While prior work on asynchronous RL focuses on training throughput and distributional correction, we show that naively applying asynchrony to policy-gradient updates can induce qualitatively different training dynamics and lead to severe training instability. Through systematic empirical and theoretical analysis, we identify a key signature of this instability: asynchronous training exhibits persistently high cosine similarity between consecutive policy gradients, in contrast to the near-orthogonal updates observed under synchronized training. This stale-aligned gradient effect amplifies correlated updates and increases the risk of overshooting and divergence. Motivated by this observation, we propose Gradient Alignment Control (GAC), a simple dynamics-aware stabilization method that regulates asynchronous RL progress along stale-ali...
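The instability signature the abstract describes, high cosine similarity between consecutive policy-gradient updates, is easy to monitor in practice. The sketch below is not the paper's method, just a minimal illustration with synthetic numpy vectors of why independent gradients in high dimensions are near-orthogonal while stale, correlated updates are strongly aligned; the vector dimension and noise scale are arbitrary assumptions for illustration.

```python
import numpy as np

def cosine_similarity(g_prev: np.ndarray, g_curr: np.ndarray) -> float:
    """Cosine similarity between two flattened gradient vectors."""
    num = float(np.dot(g_prev, g_curr))
    denom = float(np.linalg.norm(g_prev) * np.linalg.norm(g_curr)) + 1e-12
    return num / denom

# Illustrative check (hypothetical dimensions/noise, not from the paper):
rng = np.random.default_rng(0)
d = 10_000
g1 = rng.standard_normal(d)
g2 = rng.standard_normal(d)             # independent draw: near-orthogonal
g3 = g1 + 0.1 * rng.standard_normal(d)  # stale, correlated update: aligned

print(f"independent: {cosine_similarity(g1, g2):+.3f}")  # near 0
print(f"correlated:  {cosine_similarity(g3, g1):+.3f}")  # near 1
```

In a training loop one would flatten and concatenate the model's parameter gradients at each step and track this similarity against the previous step's; the abstract reports that under synchronized training it stays near zero, while naive asynchrony keeps it persistently high.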