[2603.04851] Why Is RLHF Alignment Shallow? A Gradient Analysis
Computer Science > Machine Learning
arXiv:2603.04851 (cs)
[Submitted on 5 Mar 2026]

Title: Why Is RLHF Alignment Shallow? A Gradient Analysis
Authors: Robin Young

Abstract: Why is safety alignment in LLMs shallow? We prove that gradient-based alignment inherently concentrates on the positions where harm is decided and vanishes beyond them. Using a martingale decomposition of sequence-level harm, we derive an exact characterization of alignment gradients: the gradient at position $t$ equals the covariance between the conditional expected harm and the score function. This implies that positions beyond the harm horizon, where the output's harmfulness is already determined, receive zero gradient signal during training. This explains the empirical observation that the KL divergence between aligned and base models concentrates on early tokens. Consequently, standard alignment objectives cannot produce deep alignment, regardless of optimization quality. We introduce the concept of harm information $I_t$, which quantifies each position's influence on harm, and prove that the equilibrium KL divergence tracks this quantity. Finally, we derive an objective based on recovery penalties that creates gradient signal at all positions, providing theoretical grounding for empirically successful data augmentation techniques.

Subjects: Machine Learning (cs.LG); Computation and Language (cs.CL...
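The abstract's central identity, that the alignment gradient at a position equals the covariance between conditional expected harm and the score function, can be checked numerically for a single softmax-parameterized position. The sketch below is ours, not the paper's: the harm values `h` are a toy stand-in for the conditional expected harm after each next-token choice, and the claim illustrated is that when `h` is constant (harm already determined, i.e. past the harm horizon), the gradient vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = rng.standard_normal(5)
p = np.exp(logits) / np.exp(logits).sum()  # next-token distribution at one position

# Toy conditional expected harm given each candidate next token (assumed values).
h = rng.random(5)

# Policy gradient of expected harm E[H] = sum_i p_i h_i with respect to logits:
# d/d theta_j = p_j (h_j - E[H]).
grad = p * (h - p @ h)

# Score function per coordinate: s_j(i) = d/d theta_j log p_i = 1{i=j} - p_j.
# Since E[s_j] = 0, Cov(h, s_j) = E[h(i) s_j(i)].
score = np.eye(5) - p                 # score[i, j] = s_j(i)
cov = p @ (h[:, None] * score)        # sum_i p_i h_i s_j(i)

assert np.allclose(grad, cov)         # gradient == covariance, as the abstract states

# Beyond the harm horizon the conditional expected harm no longer depends on
# the next token, so the covariance (and hence the gradient) is exactly zero.
h_const = np.full(5, 0.7)
grad_const = p * (h_const - p @ h_const)
assert np.allclose(grad_const, 0.0)
```

In this one-position view, the vanishing-gradient claim is immediate: a constant has zero covariance with any zero-mean score, so no optimizer settings can recover training signal at positions past the harm horizon.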