[2603.00140] Steering Away from Memorization: Reachability-Constrained Reinforcement Learning for Text-to-Image Diffusion
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.00140 (cs)
[Submitted on 24 Feb 2026]

Title: Steering Away from Memorization: Reachability-Constrained Reinforcement Learning for Text-to-Image Diffusion
Authors: Sathwik Karnik, Juyeop Kim, Sanmi Koyejo, Jong-Seok Lee, Somil Bansal

Abstract: Text-to-image diffusion models often memorize training data, revealing a fundamental failure to generalize beyond the training set. Current mitigation strategies typically sacrifice image quality or prompt alignment to reduce memorization. To address this, we propose Reachability-Aware Diffusion Steering (RADS), an inference-time framework that prevents memorization while preserving generation fidelity. RADS models the diffusion denoising process as a dynamical system and applies concepts from reachability analysis to approximate the "backward reachable tube"--the set of intermediate states that inevitably evolve into memorized samples. We then formulate mitigation as a constrained reinforcement learning (RL) problem, where a policy learns to steer the trajectory away from memorization via minimal perturbations in the caption embedding space. Empirical evaluations show that RADS achieves a superior Pareto frontier between generation diversity (SSCD), quality (FID),...
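The abstract describes two ingredients: approximating the backward reachable tube (states whose denoising rollout ends at a memorized sample) and minimally perturbing the caption embedding so the trajectory exits that tube. A toy sketch of this idea, with made-up linear "denoising" dynamics and a greedy steering loop (all names and dynamics are illustrative assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

MEMORIZED = np.array([1.0, 1.0])  # stand-in for a memorized training sample
EPS = 0.05                        # tube radius around the memorized sample


def denoise_step(x, c, eta=0.3):
    """Toy 2-D 'denoising' dynamics: the state is pulled toward an
    attractor set by the caption embedding c (hypothetical stand-in
    for a real diffusion denoiser)."""
    return x + eta * (c - x)


def in_backward_reachable_tube(x, c, horizon=20):
    """Approximate BRT membership by rolling the deterministic dynamics
    forward and checking whether the endpoint lands near the memorized
    sample."""
    for _ in range(horizon):
        x = denoise_step(x, c)
    return np.linalg.norm(x - MEMORIZED) < EPS


def steer_embedding(x, c, step=0.05, max_iters=200):
    """Greedy stand-in for the constrained-RL policy: nudge the embedding
    away from the memorized attractor until the rollout exits the
    approximate tube, keeping the perturbation small."""
    c = c.copy()
    for _ in range(max_iters):
        if not in_backward_reachable_tube(x, c):
            return c
        direction = c - MEMORIZED
        norm = np.linalg.norm(direction)
        if norm < 1e-8:  # embedding sits exactly on the memorized point
            direction = rng.standard_normal(c.shape)
            direction /= np.linalg.norm(direction)
        else:
            direction /= norm
        c = c + step * direction
    return c


x0 = np.array([0.0, 0.0])
c0 = MEMORIZED.copy()  # an embedding whose rollout reproduces the sample
assert in_backward_reachable_tube(x0, c0)

c_safe = steer_embedding(x0, c0)
assert not in_backward_reachable_tube(x0, c_safe)
print("perturbation norm:", np.linalg.norm(c_safe - c0))
```

In the paper the policy is learned with constrained RL over a real diffusion model rather than this hand-coded rule, but the structure is the same: a rollout-based tube check acts as the constraint, and the perturbation norm in embedding space is what is kept minimal.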