[2602.10609] Online Causal Kalman Filtering for Stable and Effective Policy Optimization
Computer Science > Computation and Language

arXiv:2602.10609 (cs) [Submitted on 11 Feb 2026 (v1), last revised 1 Mar 2026 (this version, v2)]

Title: Online Causal Kalman Filtering for Stable and Effective Policy Optimization
Authors: Shuo He, Lang Feng, Xin Cheng, Lei Feng, Bo An

Abstract: Reinforcement learning for large language models suffers from high-variance token-level importance sampling (IS) ratios, which can destabilize policy optimization at scale. To improve stability, recent methods typically either apply a fixed sequence-level IS ratio to all tokens in a sequence or adjust each token's IS ratio independently, thereby neglecting the temporal off-policy deviation across tokens within a sequence. In this paper, we first empirically show that local off-policy deviation is structurally inconsistent at the token level, which can distort policy-gradient updates across adjacent tokens and lead to training collapse. To address this issue, we propose Online Causal Kalman Filtering for stable and effective Policy Optimization (KPO). Concretely, we model the desired IS ratio as a latent state that evolves across tokens and apply a Kalman filter to update this state online and autoregressively from the states of past tokens, without access to future tokens. The resulting filtered IS ratios preserve token-wise local structure...
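To make the filtering step concrete, below is a minimal sketch of a causal (online) Kalman filter applied to a sequence of token-level IS ratios. It assumes a scalar random-walk state model with Gaussian process and observation noise; the function name `causal_kalman_filter_is`, the noise variances `q` and `r`, and the initialization are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def causal_kalman_filter_is(ratios, q=1e-3, r=1e-1, x0=1.0, p0=1.0):
    """Filter token-level IS ratios with a scalar causal Kalman filter.

    Models the desired IS ratio as a latent random-walk state
    x_t = x_{t-1} + w_t, observed through the noisy per-token ratio
    z_t = x_t + v_t. Each estimate uses only past and current tokens
    (online / autoregressive), never future ones.

    Args:
        ratios: 1-D sequence of raw token-level IS ratios.
        q: process-noise variance (how fast the latent ratio may drift).
        r: observation-noise variance (how noisy raw token ratios are).
        x0, p0: initial state estimate and its variance.
    Returns:
        1-D array of filtered IS ratios, same length as `ratios`.
    """
    x, p = x0, p0
    filtered = np.empty(len(ratios), dtype=float)
    for t, z in enumerate(ratios):
        # Predict: a random-walk transition leaves the mean unchanged
        # while the state uncertainty grows by the process noise.
        p = p + q
        # Update: blend prediction and observation via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        filtered[t] = x
    return filtered

# Example: noisy per-token ratios near 1.0 with one outlier spike.
raw = np.array([1.02, 0.98, 1.05, 3.50, 1.01, 0.97])
print(causal_kalman_filter_is(raw))  # the spike is damped toward its neighbors
```

In this sketch, the gain k governs how much each raw ratio moves the running estimate, so an isolated high-variance token is smoothed toward the ratios of adjacent tokens rather than dominating the gradient update, which matches the stability motivation stated in the abstract.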