[2602.21534] ARLArena: A Unified Framework for Stable Agentic Reinforcement Learning
Summary
The paper presents ARLArena, a framework for stabilizing agentic reinforcement learning (ARL) that combines a systematic analysis of training instability with a stable optimization method, SAMPO.
Why It Matters
As agentic reinforcement learning gains traction for solving complex tasks, stability issues hinder its scalability and effectiveness. ARLArena addresses these challenges, offering a structured approach to improve training reliability and performance, which is crucial for advancing AI applications.
Key Takeaways
- ARLArena provides a standardized testbed for evaluating ARL stability.
- The paper decomposes policy gradient into four core design dimensions for analysis.
- SAMPO, the proposed optimization method, mitigates instability in ARL.
- Empirical results show SAMPO achieves stable training across diverse tasks.
- The study offers practical guidance for developing robust LLM-based agent training pipelines.
Computer Science > Artificial Intelligence
arXiv:2602.21534 (cs) [Submitted on 25 Feb 2026]
Title: ARLArena: A Unified Framework for Stable Agentic Reinforcement Learning
Authors: Xiaoxuan Wang, Han Zhang, Haixin Wang, Yidan Shi, Ruoyan Li, Kaiqiao Han, Chenyi Tong, Haoran Deng, Renliang Sun, Alexander Taylor, Yanqiao Zhu, Jason Cong, Yizhou Sun, Wei Wang
Abstract: Agentic reinforcement learning (ARL) has rapidly gained attention as a promising paradigm for training agents to solve complex, multi-step interactive tasks. Despite encouraging early results, ARL remains highly unstable, often leading to training collapse. This instability limits scalability to larger environments and longer interaction horizons, and constrains systematic exploration of algorithmic design choices. In this paper, we propose ARLArena, a stable training recipe and systematic analysis framework that examines training stability in a controlled and reproducible setting. ARLArena first constructs a clean and standardized testbed. Then, we decompose the policy gradient into four core design dimensions and assess the performance and stability of each dimension. Through this fine-grained analysis, we distill a unified perspective on ARL and propose SAMPO, a stable agentic policy optimization method designed to mitigate the dominant sources of instability in...
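The abstract does not spell out the four design dimensions of the policy gradient or the details of SAMPO, so the snippet below is only a minimal sketch of the kind of objective such an analysis decomposes: a generic PPO-style clipped policy-gradient surrogate, as is common in LLM-agent RL training. The function name, the use of clipping, and the clip threshold are illustrative assumptions, not the paper's method.

```python
import numpy as np

def clipped_pg_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Illustrative PPO-style clipped surrogate loss (not SAMPO).

    logp_new, logp_old: per-token log-probabilities under the current
        and behavior policies.
    advantages: per-token advantage estimates.
    Returns the scalar loss to minimize (negated clipped objective).
    """
    ratio = np.exp(logp_new - logp_old)  # importance weight pi_new / pi_old
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Pessimistic minimum of the two surrogates, averaged over tokens.
    return -np.mean(np.minimum(unclipped, clipped))

# When the two policies coincide, the ratio is 1 everywhere and the
# loss reduces to the negative mean advantage.
logp = np.log(np.array([0.5, 0.25, 0.25]))
adv = np.array([1.0, -0.5, 0.2])
loss = clipped_pg_loss(logp, logp, adv)
```

Analyses like the paper's typically vary the pieces of such an estimator, e.g. how the importance ratio, clipping, advantage estimation, and aggregation are defined, to locate the sources of training collapse.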