[2510.12264] Reducing Belief Deviation in Reinforcement Learning for Active Reasoning
Computer Science > Artificial Intelligence
arXiv:2510.12264 (cs)
[Submitted on 14 Oct 2025 (v1), last revised 3 Mar 2026 (this version, v2)]

Title: Reducing Belief Deviation in Reinforcement Learning for Active Reasoning
Authors: Deyu Zou, Yongqiang Chen, Jianxiang Wang, Haochen Yang, Mufei Li, James Cheng, Pan Li, Yu Gong

Abstract: Active reasoning requires large language model (LLM) agents to interact with external sources and strategically gather information to solve problems over multiple turns. Central to this process is belief tracking: maintaining an accurate representation of the underlying problem state and the uncertainty in understanding and solving it. However, due to limited reasoning capabilities, LLM-based agents often suffer belief deviation: their internal beliefs drift from the true problem state, leading to loss of state awareness and to uninformative or repetitive actions. Once this happens, errors compound in the trajectories used for reinforcement learning (RL), leading to misattributed credit and limited exploration. To address this issue, we propose to track belief deviation and develop $\mathbf{T^3}$, a simple yet principled method that detects excessive deviation and truncates training trajectories to suppress uninformative tail effects. Hence, $\mathbf{T^3}$ preserves credits for informative p...