[2603.23550] Implicit Turn-Wise Policy Optimization for Proactive User-LLM Interaction
Computer Science > Machine Learning
arXiv:2603.23550 (cs)
[Submitted on 21 Mar 2026]

Title: Implicit Turn-Wise Policy Optimization for Proactive User-LLM Interaction

Authors: Haoyu Wang, Yuxin Chen, Liang Luo, Buyun Zhang, Ellie Dingqiao Wen, Pan Li

Abstract: Multi-turn human-AI collaboration is fundamental to deploying interactive services such as adaptive tutoring, conversational recommendation, and professional consultation. However, optimizing these interactions via reinforcement learning is hindered by the sparsity of verifiable intermediate rewards and the high stochasticity of user responses. To address these challenges, we introduce Implicit Turn-wise Policy Optimization (ITPO). ITPO leverages an implicit process reward model to derive fine-grained, turn-wise process rewards from sparse outcome signals. Unlike volatile token-level rewards, these turn-level signals exhibit superior robustness and admit a normalization mechanism that further enhances training stability. We evaluate ITPO across three representative multi-turn collaborative tasks: math tutoring, document writing, and medical recommendation. Empirical results demonstrate that ITPO, when combined with PPO, GRPO, or RLOO, consistently achieves better convergence than existing baselines. Elaborate trajectory analysis confirms that ITPO in...
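The abstract does not spell out how the implicit process reward model or the normalization mechanism is instantiated. As a rough illustration only: implicit PRMs are commonly derived from the policy-vs-reference log-likelihood ratio, with the process reward for a step taken as the increment of the cumulative (implicit) value. Below is a minimal Python/PyTorch sketch of turn-wise rewards plus a z-score normalization under that assumption; the function names, the beta scale, and the choice of normalization are all hypothetical, not the paper's definitions.

import torch

def turn_wise_implicit_rewards(policy_turn_logps: torch.Tensor,
                               ref_turn_logps: torch.Tensor,
                               beta: float = 0.05) -> torch.Tensor:
    """Sketch of turn-wise process rewards from an implicit PRM.

    policy_turn_logps / ref_turn_logps: shape (num_turns,), the summed
    token log-probabilities of each assistant turn under the current
    policy and a frozen reference model. beta is an assumed scale.
    """
    # Implicit value after turn t: V_t = beta * cumulative log-ratio.
    values = beta * torch.cumsum(policy_turn_logps - ref_turn_logps, dim=0)
    # Process reward for turn t is the value increment V_t - V_{t-1};
    # for disjoint turns this reduces to each turn's own scaled log-ratio.
    prev = torch.cat([values.new_zeros(1), values[:-1]])
    return values - prev

def normalize_turn_rewards(rewards: torch.Tensor,
                           eps: float = 1e-8) -> torch.Tensor:
    # z-score normalization across turns (one plausible reading of the
    # abstract's "normalization mechanism") to stabilize RL training.
    return (rewards - rewards.mean()) / (rewards.std(unbiased=False) + eps)

# Example: a toy 4-turn dialogue with made-up per-turn log-probabilities.
policy_lp = torch.tensor([-12.0, -9.5, -11.2, -8.7])
ref_lp = torch.tensor([-13.1, -9.9, -10.8, -9.6])
print(normalize_turn_rewards(turn_wise_implicit_rewards(policy_lp, ref_lp)))

Aggregating log-ratios at the turn level rather than per token is consistent with the abstract's claim that turn-level signals are less volatile than token-level rewards: each reward averages over many tokens, and the normalization step removes scale drift across training batches.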