[2603.19470] Adaptive Layerwise Perturbation: Unifying Off-Policy Corrections for LLM RL
Computer Science > Machine Learning
arXiv:2603.19470 (cs)
[Submitted on 19 Mar 2026]

Title: Adaptive Layerwise Perturbation: Unifying Off-Policy Corrections for LLM RL
Authors: Chenlu Ye, Xuanchang Zhang, Yifan Hao, Zhou Yu, Ziji Zhang, Abhinav Gullapalli, Hao Chen, Jing Huang, Tong Zhang

Abstract: Off-policy problems such as policy staleness and training-inference mismatch have become a major bottleneck for training stability and further exploration in LLM RL. As inference efficiency is prioritized, the distribution gap between the inference policy and the updated policy grows, leading to heavy-tailed importance ratios. Heavy-tailed ratios arise where the policy is locally sharp, which inflates gradients and can push updates outside the trust region. To address this, we propose Adaptive Layerwise Perturbation (ALP), which injects small learnable perturbations into the input hidden states of each layer during updates; the resulting perturbed policy serves as the numerator of the importance ratio against the unchanged inference policy in the objective. Intuitively, by adding controlled noise to intermediate representations, ALP prevents the updated policy from deviating too sharply from the inference policy, and enlarges the policy family to cover the inference policy with mismatch noise. Hence, the flattened distribution can naturally ...
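The abstract describes injecting learnable perturbations into each layer's input hidden states and using the perturbed policy as the importance-ratio numerator. Below is a minimal sketch of that idea, assuming a Hugging Face-style decoder-only model; the `LayerNoise` module, its `init_scale` parameter, and the hook-based injection are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class LayerNoise(nn.Module):
    """Small learnable perturbation added to a layer's input hidden states.
    (Hypothetical module; parameterization is an assumption.)"""
    def __init__(self, hidden_size: int, init_scale: float = 1e-3):
        super().__init__()
        # Learnable per-dimension scale, initialized small so the perturbed
        # policy stays close to the unperturbed one.
        self.scale = nn.Parameter(init_scale * torch.ones(hidden_size))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Controlled noise on the intermediate representation.
        return h + self.scale * torch.randn_like(h)

def attach_perturbations(model: nn.Module, layers) -> list:
    """Register a forward pre-hook on each transformer layer that perturbs
    the incoming hidden states during policy updates."""
    hooks = []
    for layer in layers:
        noise = LayerNoise(model.config.hidden_size).to(
            next(layer.parameters()).device
        )
        def pre_hook(module, args, noise=noise):
            hidden, *rest = args
            return (noise(hidden), *rest)
        hooks.append(layer.register_forward_pre_hook(pre_hook))
    return hooks

def alp_importance_ratio(perturbed_logprobs: torch.Tensor,
                         inference_logprobs: torch.Tensor) -> torch.Tensor:
    # Per the abstract: perturbed (updated) policy in the numerator,
    # unchanged inference policy in the denominator.
    return torch.exp(perturbed_logprobs - inference_logprobs)

The pre-hooks leave the frozen inference policy untouched; only forward passes used for the update see the noise, so the ratio above compares the perturbed updated policy against the original inference-time log-probabilities.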