[2603.08561] RetroAgent: From Solving to Evolving via Retrospective Dual Intrinsic Feedback
Computer Science > Artificial Intelligence
arXiv:2603.08561 (cs)
Submitted on 9 Mar 2026 (v1); last revised 26 Mar 2026 (this version, v4)

Title: RetroAgent: From Solving to Evolving via Retrospective Dual Intrinsic Feedback
Authors: Xiaoying Zhang, Zichen Liu, Yipeng Zhang, Xia Hu, Wenqi Shao

Abstract: Standard reinforcement learning (RL) for large language model (LLM) agents typically optimizes extrinsic rewards, prioritizing isolated task completion over continual adaptation. Consequently, agents often converge to suboptimal policies due to limited exploration. Furthermore, accumulated experience remains implicitly trapped within model parameters, hindering its explicit reuse for guiding future decisions. Inspired by human retrospective self-improvement, we introduce RetroAgent, an online RL framework that trains agents to master complex interactive environments not only by solving tasks, but by evolving under the joint guidance of extrinsic task rewards and retrospective dual intrinsic feedback. Specifically, RetroAgent employs a hindsight self-reflection mechanism that generates two complementary signals: (1) intrinsic numerical feedback, which rewards promising exploration by tracking real-time incremental subtask progress relative to prior attempts; and (2) intrinsic language feedback, which ...
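The "intrinsic numerical feedback" idea described in the abstract, rewarding exploration only when an attempt makes incremental subtask progress beyond prior attempts, can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the function name, the `weight` parameter, and the use of a running best-progress counter are all assumptions.

```python
def shaped_reward(extrinsic, subtasks_done, best_prior, weight=0.5):
    """Combine the extrinsic task reward with an intrinsic progress bonus.

    extrinsic:     task-level reward from the environment
    subtasks_done: number of subtasks completed in this attempt
    best_prior:    best subtask count seen in earlier attempts
    weight:        scale of the intrinsic bonus (illustrative value)

    Returns the shaped reward and the updated best-progress counter.
    """
    # Bonus only for progress beyond the previous best, so repeating
    # the same partial solution is not rewarded twice.
    progress = max(0, subtasks_done - best_prior)
    return extrinsic + weight * progress, max(best_prior, subtasks_done)

# Usage: an attempt completing 3 subtasks after a prior best of 1 earns
# a bonus even though the extrinsic (task-completion) reward is still 0.
r, new_best = shaped_reward(extrinsic=0.0, subtasks_done=3, best_prior=1)
# r == 1.0, new_best == 3
```

Keyed to the best prior attempt rather than raw subtask counts, the bonus vanishes once progress plateaus, which matches the abstract's framing of rewarding *promising* exploration rather than mere activity.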