[2507.01352] Skywork-Reward-V2: Scaling Preference Data Curation via Human-AI Synergy
Computer Science > Computation and Language
arXiv:2507.01352 (cs)
[Submitted on 2 Jul 2025 (v1), last revised 2 Mar 2026 (this version, v3)]

Title: Skywork-Reward-V2: Scaling Preference Data Curation via Human-AI Synergy
Authors: Chris Yuhao Liu, Liang Zeng, Yuzhen Xiao, Jujie He, Jiacai Liu, Chaojie Wang, Rui Yan, Wei Shen, Fuxiang Zhang, Jiacheng Xu, Yang Liu, Yahui Zhou

Abstract: Despite the critical role of reward models (RMs) in Reinforcement Learning from Human Feedback (RLHF), current state-of-the-art open RMs perform poorly on most existing evaluation benchmarks, failing to capture nuanced human preferences. We hypothesize that this brittleness stems primarily from limitations in preference datasets, which are often narrowly scoped, synthetically labeled, or lack rigorous quality control. To address these challenges, we present SynPref-40M, a large-scale preference dataset comprising 40 million preference pairs. To enable data curation at scale, we design a human-AI synergistic two-stage pipeline that leverages the complementary strengths of human annotation quality and AI scalability. In this pipeline, humans provide verified annotations, while LLMs perform automatic curation based on human guidance. Training on this preference mixture, we introduce Skywork-Reward-V2, a suite of eight reward models…
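The abstract describes two standard ingredients that can be sketched concretely: training a reward model on preference pairs (commonly via a Bradley-Terry pairwise loss) and a two-stage curation step in which an automatic LLM judgement is checked against a human-verified label. The sketch below is illustrative only; the function and field names (`preference_loss`, `curate`, `llm_judge`, `human_verified`) are hypothetical and not taken from the paper.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    # Bradley-Terry negative log-likelihood for one preference pair:
    # the loss shrinks as the reward margin (chosen - rejected) grows.
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def curate(pairs, llm_judge, human_verified):
    # Hypothetical two-stage filter in the spirit of the pipeline:
    # keep a pair only when the automatic LLM judgement agrees with
    # the human-verified preference label for that pair.
    return [p for p in pairs if llm_judge(p) == human_verified[p["id"]]]
```

With a zero margin the loss is log 2 (the model is indifferent), and it decreases monotonically as the chosen response's reward pulls ahead of the rejected one's.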