[2602.19327] Soft Sequence Policy Optimization: Bridging GMPO and SAPO
Summary
The paper introduces Soft Sequence Policy Optimization, a new approach to policy optimization in reinforcement learning that enhances training stability and exploration by integrating soft gating functions within sequence-level importance weights.
Why It Matters
This research addresses a central challenge in reinforcement learning: aligning large language models with sequence-level rewards. By proposing an optimization method that improves both training stability and exploration, it advances the alignment techniques needed for reliable AI systems.
Key Takeaways
- Introduces Soft Sequence Policy Optimization to enhance policy exploration.
- Integrates soft gating functions over token-level probability ratios.
- Addresses issues related to training stability and entropy collapse.
- Builds on existing methods like GMPO and SAPO for improved performance.
- Focuses on aligning reinforcement learning with sequence-level rewards.
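The takeaways above contrast soft gating with PPO-style hard clipping of token-level probability ratios. The sketch below illustrates the general idea under an assumed gate form (a tanh-based smooth clip, chosen here only for illustration; the paper's actual gating function may differ): hard clipping zeroes the gradient outside the trust region, while a soft gate keeps a small but nonzero signal everywhere, which is what mitigates the loss of training signal mentioned above.

```python
import math

def hard_clip(ratio: float, eps: float = 0.2) -> float:
    """PPO-style hard clip: the gradient vanishes once the token
    probability ratio leaves [1 - eps, 1 + eps]."""
    return max(1.0 - eps, min(1.0 + eps, ratio))

def soft_gate(ratio: float, eps: float = 0.2) -> float:
    """Hypothetical smooth gate (tanh-based, for illustration only).
    Near ratio = 1 it behaves like the identity; far from 1 it
    saturates toward 1 +/- eps, but its derivative never hits zero,
    so every token keeps contributing some training signal."""
    return 1.0 + eps * math.tanh((ratio - 1.0) / eps)

# Inside the trust region the two are nearly identical; outside it,
# the hard clip is flat while the soft gate still responds to the ratio.
for r in (0.9, 1.0, 1.1, 1.5, 2.0):
    print(f"ratio={r:.2f}  hard={hard_clip(r):.4f}  soft={soft_gate(r):.4f}")
```

The specific saturation shape (tanh here) is an assumption; the point is the qualitative difference between a flat region and a smooth, strictly monotone one.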
Computer Science > Machine Learning
arXiv:2602.19327 (cs) [Submitted on 22 Feb 2026]
Title: Soft Sequence Policy Optimization: Bridging GMPO and SAPO
Authors: Svetlana Glazyrina, Maksim Kryzhanovskiy, Roman Ischenko
Abstract: A significant portion of recent research on Large Language Model (LLM) alignment focuses on developing new policy optimization methods based on Group Relative Policy Optimization (GRPO). Two prominent directions have emerged: (i) a shift toward sequence-level importance sampling weights that better align with the sequence-level rewards used in many tasks, and (ii) alternatives to PPO-style clipping that aim to avoid the associated loss of training signal and entropy collapse. Recent work, such as Soft Adaptive Policy Optimization (SAPO), reformulates the Scopic objective within the GRPO framework and achieves both sequence coherence and token adaptivity. Geometric-Mean Policy Optimization (GMPO) leverages token-wise ratio clipping within sequence importance sampling weights. Building on these ideas, this work proposes a new objective that promotes effective policy exploration while maintaining training stability. Specifically, we introduce Soft Sequence Policy Optimization, an off-policy reinforcement learning objective that incorporates soft gating functions over token-level probability ratios w...
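The abstract describes GMPO as applying token-wise ratio clipping inside a sequence-level importance sampling weight. A minimal sketch of that aggregation, assuming the sequence weight is the geometric mean of the clipped token ratios (the standard GMPO construction; hyperparameter values here are illustrative):

```python
import math

def gmpo_sequence_weight(token_ratios: list[float], eps: float = 0.2) -> float:
    """Sequence-level importance weight in the GMPO style:
    1. clip each token-level probability ratio to [1 - eps, 1 + eps];
    2. take the geometric mean over the sequence.
    The geometric mean keeps the weight on the same scale as a single
    token ratio, so one outlier token cannot blow up the whole sequence."""
    clipped = [max(1.0 - eps, min(1.0 + eps, r)) for r in token_ratios]
    mean_log = sum(math.log(r) for r in clipped) / len(clipped)
    return math.exp(mean_log)

# An extreme token ratio (5.0) is first clipped, then averaged in
# log-space, so its effect on the sequence weight stays bounded.
print(gmpo_sequence_weight([1.0, 1.1, 5.0, 0.95]))
```

This is where the proposed method departs from GMPO: per the abstract, the hard per-token clip above is replaced with a soft gating function, keeping the sequence-level aggregation but avoiding the dead gradient zone that clipping creates.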