[2603.21972] Demystifying Reinforcement Learning for Long-Horizon Tool-Using Agents: A Comprehensive Recipe
Computer Science > Machine Learning
arXiv:2603.21972 (cs)
[Submitted on 23 Mar 2026]

Title: Demystifying Reinforcement Learning for Long-Horizon Tool-Using Agents: A Comprehensive Recipe
Authors: Xixi Wu, Qianguo Sun, Ruiyang Zhang, Chao Song, Junlong Wu, Yiyan Qi, Hong Cheng

Abstract: Reinforcement Learning (RL) is essential for evolving Large Language Models (LLMs) into autonomous agents capable of long-horizon planning, yet a practical recipe for scaling RL in complex, multi-turn environments remains elusive. This paper presents a systematic empirical study using TravelPlanner, a challenging testbed requiring tool orchestration to satisfy multifaceted constraints. We decompose the agentic RL design space along 5 axes: reward shaping, model scaling, data composition, algorithm selection, and environmental stability. Our controlled experiments yield 7 key takeaways, e.g., (1) reward and algorithm choices are scale-dependent, as smaller models benefit from staged rewards and enhanced exploration, whereas larger models converge efficiently with simpler dense rewards; (2) ~1K training samples with a balanced difficulty mixture mark a sweet spot for both in-domain and out-of-domain performance; and (3) environmental stability...
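The staged-versus-dense reward distinction in takeaway (1) can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, stage gates, and the 0.2/0.8 split are all illustrative assumptions about how a staged reward might gate later credit behind earlier milestones, while a dense reward scores every constraint each episode.

```python
def dense_reward(constraints_met: list[bool]) -> float:
    """Dense reward: fraction of all plan constraints satisfied, every episode."""
    return sum(constraints_met) / len(constraints_met)

def staged_reward(format_ok: bool, tools_ok: bool, constraints_met: list[bool]) -> float:
    """Staged reward (hypothetical): later stages contribute only after earlier
    ones pass, giving a smaller model a curriculum-like signal to climb."""
    if not format_ok:   # stage 1: output parses as a valid plan at all
        return 0.0
    if not tools_ok:    # stage 2: tool calls executed without errors
        return 0.2
    # stage 3: graded credit for satisfying the trip constraints
    return 0.2 + 0.8 * (sum(constraints_met) / len(constraints_met))
```

Under this sketch, a small model that only learns to format plans already earns a nonzero signal, whereas the dense variant rewards nothing until constraints are actually met.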