[2603.21175] Reward Sharpness-Aware Fine-Tuning for Diffusion Models
Computer Science > Machine Learning
arXiv:2603.21175 (cs)
[Submitted on 22 Mar 2026]

Title: Reward Sharpness-Aware Fine-Tuning for Diffusion Models
Authors: Kwanyoung Kim, Byeongsu Sim

Abstract: Reinforcement learning from human feedback (RLHF) has proven effective in aligning large language models with human preferences, inspiring the development of reward-centric diffusion reinforcement learning (RDRL) to achieve similar alignment and controllability. While diffusion models can generate high-quality outputs, RDRL remains susceptible to reward hacking, where the reward score increases without corresponding improvements in perceptual quality. We demonstrate that this vulnerability arises from the non-robustness of reward model gradients, particularly when the reward landscape with respect to the input image is sharp. To mitigate this issue, we introduce methods that exploit gradients from a robustified reward model without requiring its retraining. Specifically, we employ gradients from a flattened reward model, obtained through parameter perturbations of the diffusion model and perturbations of its generated samples. Empirically, each method independently alleviates reward hacking and improves robustness, while their joint use amplifies these benefits. Our resulting framework, RSA-FT (Reward Sharpness-Aware Fine-Tuning), is simple, broadly com...
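The sample-perturbation idea in the abstract (using gradients of a flattened reward rather than the raw, possibly sharp one) can be illustrated with a minimal smoothing sketch. This is not the paper's implementation: the function name `smoothed_reward_grad` and the parameters `sigma` and `n_samples` are illustrative assumptions, and the toy reward stands in for a real reward model. The sketch averages reward gradients over Gaussian-perturbed copies of a generated sample, approximating the gradient of a smoothed reward landscape.

```python
import numpy as np

def smoothed_reward_grad(reward_grad, x, sigma=0.1, n_samples=16, seed=None):
    """Average the reward gradient over Gaussian-perturbed copies of x.

    Instead of following the gradient of the raw reward at x (which may
    sit in a sharp region and drive reward hacking), follow the gradient
    of the smoothed reward E_eps[r(x + sigma * eps)], estimated by
    Monte Carlo. All names/defaults here are illustrative, not from the paper.
    """
    rng = np.random.default_rng(seed)
    grads = [reward_grad(x + sigma * rng.standard_normal(x.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)

# Toy reward r(x) = -||x||^2 with analytic gradient -2x,
# standing in for a learned reward model's input gradient.
toy_grad = lambda x: -2.0 * x
x = np.ones(4)
g = smoothed_reward_grad(toy_grad, x, sigma=0.05, n_samples=64, seed=0)
```

With a small `sigma` the smoothed gradient stays close to the raw one in flat regions but damps the contribution of sharp spikes, which is the robustness property the abstract attributes to the flattened reward model.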