[2602.19345] Smooth Gate Functions for Soft Advantage Policy Optimization
Summary
This paper studies smooth gate functions for Soft Advantage Policy Optimization, improving the stability of large language model training by replacing hard clipping with smooth sigmoid-based gates.
Why It Matters
The findings contribute to the ongoing development of more stable training methods for large language models, addressing issues of instability in policy optimization. This is crucial for improving the performance and reliability of AI systems in various applications, particularly in reasoning tasks.
Key Takeaways
- Smooth Gate Functions can improve training stability in large language models.
- Replacing hard clipping with smooth gates leads to better model performance.
- The paper formalizes properties that admissible gates should satisfy.
- Empirical evaluations on Qwen2.5-7B-Instruct compare the effectiveness of several gate families on mathematical reasoning tasks.
- Findings provide practical guidance for designing robust optimization objectives.
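To make the core idea concrete, the sketch below contrasts hard clipping of the importance ratio with a smooth, saturating gate. The `tanh`-based gate is an illustrative placeholder chosen to satisfy natural admissibility properties (identity value and slope at ratio 1, monotone, bounded); it is not the paper's exact SAPO gate.

```python
import math

def hard_clip(ratio: float, eps: float = 0.2) -> float:
    """PPO/GRPO-style hard clipping: constant (zero-gradient)
    outside the trust interval [1 - eps, 1 + eps]."""
    return max(1.0 - eps, min(1.0 + eps, ratio))

def smooth_gate(ratio: float, eps: float = 0.2) -> float:
    """Illustrative smooth gate (hypothetical form, not the paper's
    exact SAPO gate). It matches the identity at ratio = 1, with
    g(1) = 1 and g'(1) = 1, is monotone, and saturates smoothly
    toward 1 +/- eps, so gradients taper instead of vanishing
    abruptly at the clip boundary."""
    return 1.0 + eps * math.tanh((ratio - 1.0) / eps)

# Near ratio = 1 the two behave alike; far from 1, the hard clip
# flattens exactly while the smooth gate saturates gradually.
for r in (0.5, 0.9, 1.0, 1.1, 2.0):
    print(f"r={r:.1f}  clip={hard_clip(r):.3f}  gate={smooth_gate(r):.3f}")
```

The saturation behavior is what the paper formalizes as admissibility: any bounded, monotone gate agreeing with the identity to first order at ratio 1 is a candidate replacement for the clip.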
Computer Science > Machine Learning
arXiv:2602.19345 (cs) [Submitted on 22 Feb 2026]
Title: Smooth Gate Functions for Soft Advantage Policy Optimization
Authors: Egor Denisov, Svetlana Glazyrina, Maksim Kryzhanovskiy, Roman Ischenko
Abstract: Group Relative Policy Optimization (GRPO) has significantly advanced the training of large language models and enhanced their reasoning capabilities, but it remains susceptible to instability due to its use of hard clipping. Soft Adaptive Policy Optimization (SAPO) addresses this limitation by replacing clipping with a smooth sigmoid-based gate function, which leads to more stable updates. We push this idea further and investigate the impact of different gate functions on both training stability and final model performance. We formalize the key properties that admissible gates should satisfy and identify several families of such functions for empirical evaluation. This paper presents an analysis of our findings based on experiments conducted with the Qwen2.5-7B-Instruct model on mathematical reasoning tasks. These results provide practical guidance for designing smoother and more robust policy optimization objectives for large language model training.
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
Cite as: arXiv:2602.19345 [cs.LG]
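For reference, the hard-clipped surrogate that GRPO inherits from PPO can be written as follows; the smooth-gated variant below it is schematic, with $g$ standing in for a sigmoid-based gate (the paper's exact SAPO objective is not reproduced here):

```latex
\mathcal{J}_{\text{clip}}(\theta)
  = \mathbb{E}\!\left[\min\!\Big(r_t(\theta)\,\hat{A}_t,\;
      \operatorname{clip}\!\big(r_t(\theta),\,1-\varepsilon,\,1+\varepsilon\big)\,\hat{A}_t\Big)\right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}

% Schematic smooth-gated replacement: the clip is absorbed into a
% smooth, bounded gate g acting on the importance ratio.
\mathcal{J}_{\text{soft}}(\theta)
  = \mathbb{E}\!\left[\, g\!\big(r_t(\theta)\big)\,\hat{A}_t \,\right]
```

Under this framing, the paper's question is which choices of $g$ (beyond the original sigmoid) preserve stability while improving final performance.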