[2603.19685] A Subgoal-driven Framework for Improving Long-Horizon LLM Agents
Computer Science > Artificial Intelligence

arXiv:2603.19685 (cs)

[Submitted on 20 Mar 2026]

Title: A Subgoal-driven Framework for Improving Long-Horizon LLM Agents

Authors: Taiyi Wang, Sian Gooding, Florian Hartmann, Oriana Riva, Edward Grefenstette

Abstract: Large language model (LLM)-based agents have emerged as powerful autonomous controllers for digital environments, including mobile interfaces, operating systems, and web browsers. Web navigation, for example, requires handling dynamic content and long sequences of actions, making it particularly challenging. Existing LLM-based agents struggle with long-horizon planning in two main ways. During online execution, they often lose track as new information arrives, lacking a clear and adaptive path toward the final goal. This issue is further exacerbated during reinforcement learning (RL) fine-tuning, where sparse and delayed rewards make it difficult for agents to identify which actions lead to success, preventing them from maintaining coherent reasoning over extended tasks. To address these challenges, we propose two contributions. First, we introduce an agent framework that leverages proprietary models for online planning through subgoal decomposition. Second, we present MiRA (Milestoning your Reinforcement Learning Enhanced Agent), an RL training framework that use...