[2505.19653] Token-Importance Guided Direct Preference Optimization
Computer Science > Artificial Intelligence

arXiv:2505.19653 (cs)

[Submitted on 26 May 2025 (v1), last revised 2 Mar 2026 (this version, v3)]

Title: Token-Importance Guided Direct Preference Optimization
Authors: Ning Yang, Hai Lin, Yibo Liu, Baoliang Tian, Guoqing Liu, Haijun Zhang

Abstract: Aligning Large Language Models (LLMs) with human preferences is crucial for safe and effective AI interactions. While popular methods like Direct Preference Optimization (DPO) have simplified alignment, they remain sensitive to data noise and overlook the differential importance of individual tokens. Existing token-level approaches often rely on probability prediction or simplistic weighting schemes to obtain token importance, which still cannot fully address these issues. To address these issues, we propose Token-Importance Guided Direct Preference Optimization (TI-DPO), a framework that achieves fine-grained semantic control through two synergistic innovations. First, we propose a novel hybrid weighting mechanism that combines gradient attribution with a Gaussian prior, ensuring both the accuracy and robustness of token importance scores. Second, we employ a triplet loss to provide structured guidance for the optimization, explicitly guiding model outputs to approach preferred responses and diverge from non-preferred ones. Experimental re...
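The abstract's two ingredients can be sketched in a toy form: hybrid token weights that blend gradient-attribution scores with a Gaussian positional prior, and a token-weighted triplet objective that pulls the policy toward the preferred response and away from the non-preferred one. This is a minimal reconstruction from the abstract alone; the function names, the mixing coefficient, the choice of a midpoint-centered prior, and the squared-distance triplet form are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch of TI-DPO's two components as described in the
# abstract. Everything beyond "gradient attribution + Gaussian prior"
# and "triplet loss with a margin" is an assumption for illustration.
import numpy as np


def hybrid_token_weights(grad_norms, mix=0.5, sigma=None):
    """Blend normalized gradient-attribution scores with a Gaussian
    positional prior (assumed to be centered on the sequence midpoint)."""
    grad_norms = np.asarray(grad_norms, dtype=float)
    T = len(grad_norms)
    # Gradient attribution: per-token gradient magnitudes, normalized.
    attr = grad_norms / (grad_norms.sum() + 1e-8)
    # Gaussian prior over positions (width sigma is an assumed default).
    pos = np.arange(T)
    sigma = sigma if sigma is not None else max(T / 4.0, 1.0)
    prior = np.exp(-0.5 * ((pos - (T - 1) / 2.0) / sigma) ** 2)
    prior /= prior.sum()
    # Hybrid weights: convex combination, renormalized to sum to 1.
    w = mix * attr + (1.0 - mix) * prior
    return w / w.sum()


def triplet_loss(anchor_lp, chosen_lp, rejected_lp, weights, margin=1.0):
    """Token-weighted triplet objective (assumed squared-distance form):
    push the policy's per-token log-probs (anchor) closer to the chosen
    response than to the rejected one, by at least `margin`."""
    d_pos = np.sum(weights * (anchor_lp - chosen_lp) ** 2)
    d_neg = np.sum(weights * (anchor_lp - rejected_lp) ** 2)
    return max(0.0, d_pos - d_neg + margin)
```

As a sanity check, a token with the largest gradient magnitude receives the largest hybrid weight, and the triplet loss hits zero once the anchor is much closer to the chosen log-probs than to the rejected ones by more than the margin.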