[2604.01499] Matching Accuracy, Different Geometry: Evolution Strategies vs GRPO in LLM Post-Training
Computer Science > Machine Learning
arXiv:2604.01499 (cs)
[Submitted on 2 Apr 2026]

Title: Matching Accuracy, Different Geometry: Evolution Strategies vs GRPO in LLM Post-Training
Authors: William Hoy, Binxu Wang, Xu Pan

Abstract: Evolution Strategies (ES) have emerged as a scalable, gradient-free alternative to reinforcement-learning-based LLM fine-tuning, but it remains unclear whether comparable task performance implies comparable solutions in parameter space. We compare ES and Group Relative Policy Optimization (GRPO) across four tasks in both single-task and sequential continual-learning settings. ES matches or exceeds GRPO in single-task accuracy and remains competitive sequentially when its iteration budget is controlled. Despite this similarity in task performance, the two methods produce markedly different model updates: ES makes much larger changes and induces broader off-task KL drift, whereas GRPO makes smaller, more localized updates. Strikingly, the ES and GRPO solutions are linearly connected with no loss barrier, even though their update directions are nearly orthogonal. We develop an analytical theory of ES that explains all these phenomena within a unified framework, showing how ES can accumulate large off-task movement on weakly informative directions while still making enough...
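The abstract hinges on three measurable objects: the gradient-free ES update itself, the loss barrier along the straight line between the ES and GRPO solutions, and the angle between the two update directions. Below is a minimal sketch of all three, assuming flat NumPy parameter vectors and hypothetical `reward_fn` / `loss_fn` callables (illustrative stand-ins, not the paper's code):

```python
import numpy as np

def es_update(theta, reward_fn, sigma=0.02, lr=0.01, pop_size=32, rng=None):
    """One antithetic-sampling ES step: no backprop, only reward evaluations."""
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal((pop_size, theta.size))
    # Evaluate reward at mirrored perturbations theta +/- sigma * eps_i.
    r_plus = np.array([reward_fn(theta + sigma * e) for e in eps])
    r_minus = np.array([reward_fn(theta - sigma * e) for e in eps])
    # Reward-weighted noise gives a finite-difference estimate of the gradient.
    grad_est = (r_plus - r_minus) @ eps / (2 * sigma * pop_size)
    return theta + lr * grad_est

def loss_barrier(theta_es, theta_grpo, loss_fn, n_points=11):
    """Loss along the segment between two solutions; a value <= 0 means no
    barrier, i.e. the solutions are linearly mode-connected."""
    alphas = np.linspace(0.0, 1.0, n_points)
    losses = [loss_fn((1 - a) * theta_es + a * theta_grpo) for a in alphas]
    return max(losses) - max(losses[0], losses[-1])

def update_cosine(theta_init, theta_es, theta_grpo):
    """Cosine between the two update directions; near 0 = nearly orthogonal."""
    u, v = theta_es - theta_init, theta_grpo - theta_init
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

Antithetic (mirrored) sampling is one standard variance-reduction choice for ES; the paper may use a different estimator. Finding `loss_barrier(...) <= 0` together with `update_cosine(...) ≈ 0` would reproduce the abstract's headline geometry: two solutions reached along nearly orthogonal directions that nonetheless lie in a single linearly connected low-loss region.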