[2603.20212] Fast-Slow Thinking RM: Efficient Integration of Scalar and Generative Reward Models
Computer Science > Computation and Language
arXiv:2603.20212 (cs) [Submitted on 2 Mar 2026]

Title: Fast-Slow Thinking RM: Efficient Integration of Scalar and Generative Reward Models
Authors: Jiayun Wu, Peixu Hou, Shan Qu, Peng Zhang, Ning Gu, Tun Lu

Abstract: Reward models (RMs) are critical for aligning Large Language Models via Reinforcement Learning from Human Feedback (RLHF). While Generative Reward Models (GRMs) achieve superior accuracy through chain-of-thought (CoT) reasoning, they incur substantial computational costs. Conversely, Scalar Reward Models (SRMs) offer efficiency but suffer from limited performance and adaptability in complex scenarios. We introduce Fast-Slow Thinking Reward Models (F/S-RM), a hybrid RM architecture inspired by Dual Process Theory. It trains a single model to integrate two distinct reward paradigms: first-token prediction as a scalar score (fast thinking) and CoT-based judgment (slow thinking), regulated by a dual-confidence activation mechanism that determines when to activate slow thinking. F/S-RM achieves a 1.2% relative performance improvement over state-of-the-art models while reducing token consumption by 20.8%. Code and data will be publicly available.

Subjects: Computation and Language (cs.CL); Machine Learning (cs.LG)
Cite as: arXiv:2603.20212 [cs.CL]
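The abstract describes a confidence-gated routing scheme: a cheap scalar score is read from the model's first-token prediction, and the expensive CoT judgment runs only when that fast judgment is uncertain. The sketch below illustrates this control flow under stated assumptions; all names and the threshold are illustrative, the paper's "dual-confidence" mechanism presumably combines two confidence signals whereas this sketch uses a single first-token confidence for simplicity, and this is not the authors' actual implementation.

```python
# Hypothetical sketch of confidence-gated fast/slow reward scoring, loosely
# following the abstract. Names, threshold, and confidence measure are
# illustrative assumptions, not the paper's method.
from dataclasses import dataclass
from typing import Callable


@dataclass
class FastResult:
    score: float        # scalar reward read from first-token prediction
    confidence: float   # how decisive that fast judgment is


def fast_score(prob_good: float) -> FastResult:
    """Treat P("good") as the first token as the scalar reward; confidence is
    the distance from the undecided point 0.5, rescaled to [0, 1]."""
    return FastResult(score=prob_good, confidence=abs(prob_good - 0.5) * 2)


def fs_rm(prob_good: float,
          cot_judge: Callable[[], float],
          threshold: float = 0.6) -> float:
    """Return the fast scalar score unless confidence falls below the
    threshold, in which case fall back to slow CoT-based judgment."""
    fast = fast_score(prob_good)
    if fast.confidence >= threshold:
        return fast.score      # fast thinking suffices
    return cot_judge()         # slow thinking: run the CoT judge


# Usage with a stub CoT judge:
print(fs_rm(0.95, cot_judge=lambda: 0.7))  # confident -> fast path, 0.95
print(fs_rm(0.55, cot_judge=lambda: 0.7))  # uncertain -> slow path, 0.7
```

The token savings reported in the abstract come from this routing: the slow CoT judge (many generated tokens) is invoked only on the low-confidence subset of inputs, while high-confidence inputs cost a single forward pass.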