[2603.22829] Improving Safety Alignment via Balanced Direct Preference Optimization
Computer Science > Artificial Intelligence
arXiv:2603.22829 (cs)
[Submitted on 24 Mar 2026]

Title: Improving Safety Alignment via Balanced Direct Preference Optimization
Authors: Shiji Zhao, Mengyang Wang, Shukun Xiong, Fangzhou Chen, Qihui Zhu, Shouwei Ruan, Yisong Xiao, Ranjie Duan, Xun Chen, XingXing Wei

Abstract: With the rapid development and widespread deployment of Large Language Models (LLMs), their potential safety risks have attracted broad attention. Reinforcement Learning from Human Feedback (RLHF) has been adopted to enhance the safety of LLMs, and Direct Preference Optimization (DPO), a simple and effective alternative to RLHF, is widely used for safety alignment. However, safety alignment still suffers from severe overfitting, which limits its practical performance. This paper revisits the overfitting phenomenon from the perspective of the model's comprehension of the training data. We find that an Imbalanced Preference Comprehension phenomenon exists between the responses in a preference pair, which compromises the model's safety performance. To address this, we propose Balanced Direct Preference Optimization (B-DPO), which adaptively modulates the optimization strength between preferred and dispreferred responses based on mutual information. A series of experimental results show that B-DPO c...
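The abstract states only that B-DPO modulates the optimization strength of the preferred and dispreferred responses via mutual information, without giving the loss. The sketch below illustrates the general idea on top of a standard DPO objective; the weighting scheme (a softmax over per-response log-ratio magnitudes as a stand-in "comprehension" score) and the `alpha` temperature are hypothetical assumptions, not the paper's formula.

```python
import torch
import torch.nn.functional as F

def balanced_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                      ref_chosen_logps, ref_rejected_logps,
                      beta=0.1, alpha=1.0):
    """Minimal sketch of a DPO loss with adaptive per-response weights.

    NOTE: this is a hypothetical reconstruction from the abstract alone.
    The softmax-based weights below merely illustrate "adaptively
    modulating optimization strength between preferred and dispreferred
    responses"; the paper's mutual-information-based weighting may differ.
    """
    # Implicit reward terms of standard DPO: policy-vs-reference log-ratios.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps

    # Hypothetical comprehension proxy: treat the magnitude of each
    # log-ratio as how strongly the model has already fit that response,
    # then upweight the side it has fit less (the "less understood" one).
    with torch.no_grad():
        scores = torch.stack([chosen_ratio.abs(), rejected_ratio.abs()], dim=-1)
        weights = F.softmax(-alpha * scores, dim=-1)  # lower score -> larger weight
    w_chosen, w_rejected = weights[..., 0], weights[..., 1]

    # Weighted DPO logits: each side's contribution is scaled separately
    # (the factor 2 keeps the scale comparable to unweighted DPO, where
    # both weights would be 0.5).
    logits = beta * (2.0 * w_chosen * chosen_ratio
                     - 2.0 * w_rejected * rejected_ratio)
    return -F.logsigmoid(logits).mean()
```

With `alpha = 0` the weights collapse to 0.5 each and the loss reduces to vanilla DPO, which is the sanity check one would want from any such balancing scheme.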