[2511.10985] When Data is the Algorithm: A Systematic Study and Curation of Preference Optimization Datasets
Computer Science > Computation and Language

arXiv:2511.10985 (cs)

[Submitted on 14 Nov 2025 (v1), last revised 1 Mar 2026 (this version, v2)]

Title: When Data is the Algorithm: A Systematic Study and Curation of Preference Optimization Datasets

Authors: Aladin Djuhera, Farhan Ahmed, Swanand Ravindra Kadhe, Syed Zawad, Heiko Ludwig, Holger Boche

Abstract: Aligning large language models (LLMs) is a central objective of post-training, often achieved through reward modeling and reinforcement learning methods. Among these, direct preference optimization (DPO) has emerged as a widely adopted technique that fine-tunes LLMs on preferred completions over less favorable ones. While most frontier LLMs do not disclose their curated preference pairs, the broader LLM community has released several open-source DPO datasets, including TuluDPO, ORPO, UltraFeedback, HelpSteer, and Code-Preference-Pairs. However, systematic comparisons remain scarce, largely due to the high computational cost and the lack of rich quality annotations, making it difficult to understand how preferences were selected, which task types they span, and how well they reflect human judgment on a per-sample level. In this work, we present the first comprehensive, data-centric analysis of popular open-source DPO corpora. We levera...
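
Background note (not part of the abstract): the DPO training signal the abstract refers to is, in its standard formulation (Rafailov et al., 2023), a loss over preference pairs that pushes the policy to rank the preferred completion above the rejected one relative to a frozen reference model. A minimal statement of that standard objective, assuming the paper uses this common form, is

\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]

where (x, y_w, y_l) is a prompt with its preferred and rejected completions drawn from a preference dataset \mathcal{D}, \pi_{\mathrm{ref}} is the frozen reference (typically SFT) model, \beta controls how far the policy may drift from the reference, and \sigma is the logistic function. The datasets studied in the paper (TuluDPO, ORPO, UltraFeedback, HelpSteer, Code-Preference-Pairs) supply the (x, y_w, y_l) triples in this objective.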