[2603.28101] Heddle: A Distributed Orchestration System for Agentic RL Rollout
Computer Science > Machine Learning
arXiv:2603.28101 (cs)
[Submitted on 30 Mar 2026]

Title: Heddle: A Distributed Orchestration System for Agentic RL Rollout
Authors: Zili Zhang, Yinmin Zhong, Chengxu Yang, Chao Jin, Bingyang Wu, Xinming Wei, Yuliang Liu, Xin Jin

Abstract: Agentic Reinforcement Learning (RL) enables LLMs to solve complex tasks by alternating between a data-collection rollout phase and a policy-training phase. During rollout, the agent generates trajectories, i.e., multi-step interactions between the LLM and external tools. Frequent tool calls, however, induce long-tailed trajectory generation that bottlenecks rollout. This stems from step-centric designs that ignore trajectory context, triggering three system problems for long-tail trajectory generation: queueing delays, interference overhead, and inflated per-token time. We propose Heddle, a trajectory-centric system that optimizes the when, where, and how of agentic rollout execution. Heddle integrates three core mechanisms: trajectory-level scheduling, which uses runtime prediction and progressive priority to minimize cumulative queueing; trajectory-aware placement, which uses presorted dynamic programming and opportunistic migration during idle tool-call intervals to minimize interference; and a trajectory-adaptive resource manager that dynamically tunes model parallelism ...
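The trajectory-level scheduling idea in the abstract can be illustrated with a minimal sketch: if a predictor estimates each trajectory's remaining runtime, dispatching the predicted-longest trajectories first reduces the chance that a long-tail trajectory sits behind short ones and inflates cumulative queueing delay. Note this is a hypothetical illustration, not Heddle's actual algorithm; the function names, the toy predictor, and the longest-first heuristic are assumptions for exposition.

```python
import heapq

def schedule_trajectories(trajectories, predict_remaining):
    """Return trajectories in dispatch order, longest predicted
    remaining runtime first (a longest-job-first heuristic).

    Hypothetical sketch: the real system also uses progressive
    priority, which this toy example omits.
    """
    # Negate the prediction so heapq's min-heap pops the longest first;
    # the index breaks ties deterministically.
    heap = [(-predict_remaining(t), i, t) for i, t in enumerate(trajectories)]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, t = heapq.heappop(heap)
        order.append(t)
    return order

# Toy predictor: remaining tool-call steps times mean seconds per step.
traj = [
    {"id": "a", "steps_left": 2, "sec_per_step": 1.0},  # predicted 2.0s
    {"id": "b", "steps_left": 8, "sec_per_step": 1.5},  # predicted 12.0s
    {"id": "c", "steps_left": 5, "sec_per_step": 0.5},  # predicted 2.5s
]
order = schedule_trajectories(traj, lambda t: t["steps_left"] * t["sec_per_step"])
print([t["id"] for t in order])  # → ['b', 'c', 'a']
```

The long-tail trajectory "b" is dispatched first, so its runtime overlaps the shorter ones instead of starting after they finish.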