[2510.02692] Fine-Tuning Diffusion Models via Intermediate Distribution Shaping
Computer Science > Machine Learning
arXiv:2510.02692 (cs)
[Submitted on 3 Oct 2025 (v1), last revised 3 Mar 2026 (this version, v3)]

Title: Fine-Tuning Diffusion Models via Intermediate Distribution Shaping
Authors: Gautham Govind Anil, Shaan Ul Haque, Nithish Kannen, Dheeraj Nagaraj, Sanjay Shakkottai, Karthikeyan Shanmugam

Abstract: Diffusion models are widely used for generative tasks across domains. Given a pre-trained diffusion model, it is often desirable to fine-tune it further, either to correct for errors in learning or to align with downstream applications. Towards this, we examine the effect of shaping the distribution at intermediate noise levels induced by diffusion models. First, we show that existing variants of Rejection sAmpling based Fine-Tuning (RAFT), which we unify as GRAFT, can implicitly perform KL regularized reward maximization with reshaped rewards. Motivated by this observation, we introduce P-GRAFT to shape distributions at intermediate noise levels and demonstrate empirically that this can lead to more effective fine-tuning. We mathematically explain this via a bias-variance tradeoff. Next, we look at correcting learning errors in pre-trained flow models based on the developed mathematical framework. In particular, we propose inverse noise correction, a novel algorithm to improve ...
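To make the rejection-sampling-based fine-tuning idea concrete, the following is a minimal, hedged sketch of the selection step such methods share: draw samples from a (here, toy stand-in) generative model, score them with a reward, and keep only the top fraction as a fine-tuning dataset. All names (`toy_sample`, `reward`, `select_top_fraction`) and the specific reward are illustrative assumptions, not the paper's actual algorithm.

```python
import random

def toy_sample(n, seed=0):
    """Stand-in for sampling from a pre-trained model (here, a 1-D Gaussian)."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

def reward(x):
    """Illustrative reward: prefers samples near 1.0."""
    return -abs(x - 1.0)

def select_top_fraction(samples, frac=0.1):
    """Keep the highest-reward fraction of samples (the 'accepted' set)."""
    k = max(1, int(len(samples) * frac))
    return sorted(samples, key=reward, reverse=True)[:k]

samples = toy_sample(1000)
accepted = select_top_fraction(samples, frac=0.1)
# In an actual RAFT-style loop, `accepted` would become the dataset for a
# supervised fine-tuning pass on the model, and the loop would repeat.
```

The paper's unification (GRAFT) and its P-GRAFT variant concern where this shaping is applied; the sketch above only illustrates the basic accept-and-retrain selection at the final samples.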