[2408.15339] UNA: A Unified Supervised Framework for Efficient LLM Alignment Across Feedback Types
Computer Science > Machine Learning
arXiv:2408.15339 (cs)
[Submitted on 27 Aug 2024 (v1), last revised 7 May 2026 (this version, v4)]

Title: UNA: A Unified Supervised Framework for Efficient LLM Alignment Across Feedback Types
Authors: Zhichao Wang, Bin Bi, Can Huang, Shiva Kumar Pentyala, Zixu James Zhu, Sitaram Asur, Na Claire Cheng, Cheng Wan, Dong Nie, Lingzi Hong

Abstract: RL alignment methods, including RLHF and DPO, are primarily based on pairwise preference data. Although scalar or score-based feedback has been collected in some settings, it is rarely used directly, and preference magnitude information is typically ignored. Furthermore, current alignment frameworks offer limited capability to unify heterogeneous supervision signals, making it difficult to jointly leverage diverse data types within a single training paradigm. This limitation constrains the richness and scalability of the alignment process. To address this gap, we propose a UNified Alignment (UNA) framework capable of training across different types of feedback, including binary, pairwise, and score-based signals, through a generalized implicit reward function. Using the log sum inequality, we theoretically prove that the optimal policy is induced by this generalized implicit reward function. Extensive experiments on classical benchmarks consisten...
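To make the unified-training idea concrete, below is a minimal PyTorch sketch (not from the paper) of one way a single implicit-reward objective could cover all three feedback types. The DPO-style implicit reward r(x, y) = beta * log(pi_theta(y|x) / pi_ref(y|x)), the names implicit_reward and una_loss, and the per-type losses (cross-entropy for binary and pairwise feedback, MSE for scores) are all assumptions, since the abstract does not give UNA's exact formulation.

import torch
import torch.nn.functional as F

def implicit_reward(logp_policy, logp_ref, beta=0.1):
    # Assumed DPO-style implicit reward:
    # r(x, y) = beta * log(pi_theta(y|x) / pi_ref(y|x))
    return beta * (logp_policy - logp_ref)

def una_loss(logp_policy, logp_ref, feedback, feedback_type, beta=0.1):
    # One supervised objective, dispatched over heterogeneous feedback types.
    r = implicit_reward(logp_policy, logp_ref, beta)
    if feedback_type == "pairwise":
        # logp_* have shape (batch, 2): two responses per prompt.
        # feedback is 1.0 where the first response is preferred.
        margin = r[:, 0] - r[:, 1]
        return F.binary_cross_entropy_with_logits(margin, feedback)
    if feedback_type == "binary":
        # One response per prompt; feedback is thumbs-up/down in {0.0, 1.0}.
        return F.binary_cross_entropy_with_logits(r, feedback)
    if feedback_type == "score":
        # One response per prompt; regress the implicit reward onto the score.
        return F.mse_loss(r, feedback)
    raise ValueError(f"unknown feedback type: {feedback_type}")

# Usage: a pairwise batch of 4 prompts with (chosen, rejected) sequence log-probs.
lp_policy = torch.randn(4, 2, requires_grad=True)
lp_ref = torch.randn(4, 2)
labels = torch.ones(4)  # first response preferred in every pair
loss = una_loss(lp_policy, lp_ref, labels, "pairwise")
loss.backward()

The design point this sketch illustrates matches the abstract's claim: once feedback of any type is expressed as a target for the same implicit reward, alignment reduces to ordinary supervised minimization rather than reinforcement learning.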