[2511.22228] 3D-Consistent Multi-View Editing by Correspondence Guidance
Computer Science > Computer Vision and Pattern Recognition

arXiv:2511.22228 (cs)

[Submitted on 27 Nov 2025 (v1), last revised 20 Mar 2026 (this version, v2)]

Title: 3D-Consistent Multi-View Editing by Correspondence Guidance
Authors: Josef Bengtson, David Nilsson, Dong In Lee, Yaroslava Lochman, Fredrik Kahl

Abstract: Recent advancements in diffusion and flow models have greatly improved text-based image editing, yet methods that edit images independently often produce geometrically and photometrically inconsistent results across different views of the same scene. Such inconsistencies are particularly problematic for editing 3D representations such as NeRFs or Gaussian splat models. We propose a training-free guidance framework that enforces multi-view consistency during the image editing process. The key idea is that corresponding points should look similar after editing. To achieve this, we introduce a consistency loss that guides the denoising process toward coherent edits. The framework is flexible and can be combined with widely varying image editing methods, supporting both dense and sparse multi-view editing setups. Experimental results show that our approach significantly improves 3D consistency compared to existing multi-view editing methods. We also show that this increased consistency enables high-quality ...
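The core idea, that corresponding points should look similar after editing, can be illustrated with a minimal sketch of such a consistency loss. This is an illustrative assumption, not the paper's exact formulation: the function names, the use of precomputed pixel correspondences, and the mean-squared-color objective are all hypothetical simplifications.

```python
import numpy as np

def consistency_loss(view_a, view_b, corr_a, corr_b):
    """Mean squared color difference at corresponding pixels of two edited views.

    view_a, view_b : (H, W, 3) float arrays, edited images of the same scene.
    corr_a, corr_b : (N, 2) integer arrays of matched (row, col) coordinates,
        e.g. obtained from an off-the-shelf correspondence matcher.
    A guidance framework of this kind would differentiate a loss like this
    with respect to the intermediate denoising state to steer edits toward
    agreement across views.
    """
    colors_a = view_a[corr_a[:, 0], corr_a[:, 1]]  # (N, 3) sampled colors
    colors_b = view_b[corr_b[:, 0], corr_b[:, 1]]
    return float(np.mean((colors_a - colors_b) ** 2))
```

Two views that agree at all matched pixels yield zero loss; the more the edits diverge at corresponding points, the larger the penalty the guidance would push against.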