[2509.25845] Training-Free Reward-Guided Image Editing via Trajectory Optimal Control
Computer Science > Computer Vision and Pattern Recognition
arXiv:2509.25845 (cs)
[Submitted on 30 Sep 2025 (v1), last revised 4 Mar 2026 (this version, v2)]

Title: Training-Free Reward-Guided Image Editing via Trajectory Optimal Control
Authors: Jinho Chang, Jaemin Kim, Jong Chul Ye

Abstract: Recent advances in diffusion and flow-matching models have demonstrated remarkable capabilities in high-fidelity image synthesis. A prominent line of research employs reward guidance, which steers the generation process at inference time to align with specific objectives. However, applying this reward-guided approach to image editing, which requires preserving the semantic content of the source image while enhancing a target reward, remains largely unexplored. In this work, we introduce a novel framework for training-free, reward-guided image editing. We formulate editing as a trajectory optimal control problem: the reverse process of a diffusion model is treated as a controllable trajectory originating from the source image, and the adjoint states are iteratively updated to steer the editing process. Through extensive experiments across distinct editing tasks, we demonstrate that our approach significantly outperforms existing inversion-based training-free guidance baselines, achievin...
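The adjoint-based trajectory optimal control idea in the abstract — treat the reverse process as a controlled trajectory, propagate adjoint states backward from a terminal reward, and do gradient ascent on the controls — can be illustrated on a toy linear system. This is only a minimal sketch under stated assumptions: the linear `drift` stands in for a learned diffusion denoiser, the quadratic `reward` stands in for the paper's editing reward, and all function names (`rollout`, `adjoint_gradients`, `optimize`) are hypothetical, not from the paper.

```python
import numpy as np

def drift(x):
    # Stand-in for the learned reverse-process drift; the actual method
    # would evaluate a diffusion/flow-matching model here (assumption).
    return -0.1 * x

def rollout(x0, controls):
    # Forward pass: x_{t+1} = x_t + drift(x_t) + u_t, starting from the source state.
    xs = [x0]
    x = x0
    for u in controls:
        x = x + drift(x) + u
        xs.append(x)
    return xs

def adjoint_gradients(xs, controls, target, lam):
    # Backward pass: terminal adjoint is the reward gradient dR/dx_T,
    # then each step propagates it through the Jacobian of the dynamics.
    T = len(controls)
    a = -2.0 * (xs[-1] - target)          # R(x) = -||x - target||^2
    grads = [None] * T
    for t in reversed(range(T)):
        # dJ/du_t = adjoint at step t+1 minus quadratic control penalty
        grads[t] = a - 2.0 * lam * controls[t]
        # Jacobian of x_{t+1} w.r.t. x_t for the linear drift above is 0.9
        a = a * 0.9
    return grads

def optimize(x0, target, T=10, lam=0.01, lr=0.05, iters=200):
    # Iteratively update controls by gradient ascent on the adjoint gradients.
    controls = [np.zeros_like(x0) for _ in range(T)]
    for _ in range(iters):
        xs = rollout(x0, controls)
        grads = adjoint_gradients(xs, controls, target, lam)
        controls = [u + lr * g for u, g in zip(controls, grads)]
    return rollout(x0, controls)[-1]

x0 = np.array([1.0, 0.0])
target = np.array([0.5, -0.5])
x_final = optimize(x0, target)
x_free = rollout(x0, [np.zeros(2)] * 10)[-1]   # uncontrolled trajectory
```

The controlled endpoint `x_final` lands near `target`, while the uncontrolled rollout just decays toward the origin; the control-cost weight `lam` plays the role of the fidelity/edit-strength trade-off the abstract alludes to.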