[2602.21346] Alignment-Weighted DPO: A principled reasoning approach to improve safety alignment
Summary
This article presents a novel approach to enhance safety alignment in large language models (LLMs) through Alignment-Weighted Direct Preference Optimization (DPO), addressing vulnerabilities to deceptive prompts.
Why It Matters
As LLMs become more integrated into various applications, ensuring their safety against potential misuse is critical. This research contributes to the ongoing efforts to improve model robustness and ethical AI deployment, making it relevant for developers and researchers in AI safety.
Key Takeaways
- Current alignment techniques in LLMs are vulnerable to jailbreak attacks.
- The proposed Alignment-Weighted DPO enhances reasoning in model responses.
- A new Chain-of-Thought fine-tuning dataset improves principled refusals.
- The method shows improved robustness while maintaining model utility.
- Empirical results indicate significant advancements in safety alignment.
Computer Science > Computation and Language arXiv:2602.21346 (cs) [Submitted on 24 Feb 2026] Title:Alignment-Weighted DPO: A principled reasoning approach to improve safety alignment Authors:Mengxuan Hu, Vivek V. Datla, Anoop Kumar, Zihan Guan, Sheng Li, Alfy Samuel, Daben Liu View a PDF of the paper titled Alignment-Weighted DPO: A principled reasoning approach to improve safety alignment, by Mengxuan Hu and 6 other authors View PDF HTML (experimental) Abstract:Recent advances in alignment techniques such as Supervised Fine-Tuning (SFT), Reinforcement Learning from Human Feedback (RLHF), and Direct Preference Optimization (DPO) have improved the safety of large language models (LLMs). However, these LLMs remain vulnerable to jailbreak attacks that disguise harmful intent through indirect or deceptive phrasing. Using causal intervention, we empirically demonstrate that this vulnerability stems from shallow alignment mechanisms that lack deep reasoning, often rejecting harmful prompts without truly understanding why they are harmful. To mitigate this vulnerability, we propose enhancing alignment through reasoning-aware post-training. We construct and release a novel Chain-of-Thought (CoT) fine-tuning dataset that includes both utility-oriented and safety-critical prompts with step-by-step rationales. Fine-tuning on this dataset encourages models to produce principled refusals grounded in reasoning, outperforming standard SFT baselines. Furthermore, inspired by failure patte...