[2602.01664] FlowSteer: Interactive Agentic Workflow Orchestration via End-to-End Reinforcement Learning
Summary
FlowSteer introduces an end-to-end reinforcement learning framework that automates agentic workflow orchestration, addressing challenges such as high manual cost, reliance on specific operators and LLMs, and sparse reward signals.
Why It Matters
This research tackles core bottlenecks in workflow orchestration: the manual effort of designing workflows, lock-in to particular operators or LLM backends, and the difficulty of learning from sparse rewards. By framing orchestration as multi-turn reinforcement learning over an executable canvas, FlowSteer offers an approach that could streamline complex agentic pipelines across domains, making it relevant to AI practitioners and researchers alike.
Key Takeaways
- FlowSteer automates workflow orchestration using reinforcement learning.
- It addresses challenges such as high manual costs and reliance on specific operators and LLMs.
- The framework supports diverse operator libraries and interchangeable LLM backends.
- Experimental results show significant performance improvements over existing methods.
- Canvas Workflow Relative Policy Optimization (CWRPO) introduces diversity-constrained rewards to stabilize learning.
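The paper's CWRPO formulation is not spelled out in this excerpt, but a "relative policy optimization with diversity-constrained rewards" objective might look roughly like the sketch below: a group-relative advantage (GRPO-style mean baseline) combined with a penalty on duplicated workflows within a sampled group. The penalty form, the weight `lam`, and the duplicate-count similarity measure are assumptions for illustration, not the paper's actual method.

```python
from collections import Counter

def shaped_advantages(task_rewards, workflows, lam=0.1):
    """Hypothetical CWRPO-style shaping: group-relative advantages with a
    diversity penalty on duplicated workflows (illustrative, not the paper's).

    task_rewards: per-rollout task scores for one group of sampled workflows.
    workflows: corresponding workflow structures, e.g. tuples of operator names.
    lam: assumed weight of the diversity penalty.
    """
    n = len(task_rewards)
    counts = Counter(workflows)
    # Penalize rollouts whose workflow duplicates another in the group,
    # nudging the policy toward structurally diverse orchestrations.
    shaped = [r - lam * (counts[w] - 1) / max(n - 1, 1)
              for r, w in zip(task_rewards, workflows)]
    # Group-relative baseline: subtract the group mean reward.
    mean = sum(shaped) / n
    return [s - mean for s in shaped]
```

With `lam=0`, this reduces to a plain group-relative baseline; the diversity term only reorders advantages when several rollouts collapse onto the same workflow.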
Computer Science > Artificial Intelligence
arXiv:2602.01664 (cs)
[Submitted on 2 Feb 2026 (v1), last revised 17 Feb 2026 (this version, v3)]
Title: FlowSteer: Interactive Agentic Workflow Orchestration via End-to-End Reinforcement Learning
Authors: Mingda Zhang, Haoran Luo, Tiesunlong Shen, Qika Lin, Xiaoying Tang, Rui Mao, Erik Cambria
Abstract: In recent years, a variety of powerful agentic workflows have been applied to solve a wide range of human problems. However, existing workflow orchestration still faces key challenges, including high manual cost, reliance on specific operators/large language models (LLMs), and sparse reward signals. To address these challenges, we propose FlowSteer, an end-to-end reinforcement learning framework that pairs a lightweight policy model, acting as the agent, with an executable canvas environment, automating workflow orchestration through multi-turn interaction. In this process, the policy model analyzes execution states and selects editing actions, while the canvas executes operators and returns feedback for iterative refinement. Moreover, FlowSteer provides a plug-and-play framework that supports diverse operator libraries and interchangeable LLM backends. To effectively train this interaction paradigm, we propose Canvas Workflow Relative Policy Optimization (CWRPO) …
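The multi-turn loop the abstract describes, where the policy inspects the execution state, picks an editing action, and the canvas executes operators and returns feedback, can be skeletonized as follows. The `Canvas` class, the action names, and the scripted policy are illustrative stand-ins, not FlowSteer's actual API.

```python
class Canvas:
    """Toy executable canvas: the workflow is just a list of operator names.
    A real canvas would run the operators and return rich execution feedback."""
    def __init__(self):
        self.ops = []

    def reset(self):
        self.ops = []
        return tuple(self.ops)

    def apply(self, action):
        # An "editing action" here simply appends an operator to the workflow.
        self.ops.append(action)
        return tuple(self.ops)

    def result(self):
        return list(self.ops)

def orchestrate(policy, canvas, max_turns=10):
    """Multi-turn loop: policy reads the state, emits an edit,
    the canvas executes it and feeds the new state back."""
    state = canvas.reset()
    for _ in range(max_turns):
        action = policy(state)     # select an editing action from the current state
        if action == "stop":       # the policy may terminate refinement early
            break
        state = canvas.apply(action)
    return canvas.result()

def scripted_policy(state):
    """Scripted stand-in for the learned lightweight policy model."""
    plan = ["retrieve", "rerank", "generate"]
    return plan[len(state)] if len(state) < len(plan) else "stop"
```

Swapping `scripted_policy` for a trained model, and `Canvas.apply` for real operator execution, recovers the interaction paradigm the paper trains with reinforcement learning.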