[2603.00492] ArtiFixer: Enhancing and Extending 3D Reconstruction with Auto-Regressive Diffusion Models
arXiv:2603.00492 (cs) [Submitted on 28 Feb 2026]

Subjects: Computer Science > Computer Vision and Pattern Recognition

Title: ArtiFixer: Enhancing and Extending 3D Reconstruction with Auto-Regressive Diffusion Models

Authors: Riccardo de Lutio, Tobias Fischer, Yen-Yu Chang, Yuxuan Zhang, Jay Zhangjie Wu, Xuanchi Ren, Tianchang Shen, Katarina Tothova, Zan Gojcic, Haithem Turki

Abstract: Per-scene optimization methods such as 3D Gaussian Splatting provide state-of-the-art novel view synthesis quality but extrapolate poorly to under-observed areas. Methods that leverage generative priors to correct artifacts in these areas hold promise but currently suffer from two shortcomings. The first is scalability, as existing methods use image diffusion models or bidirectional video models that are limited in the number of views they can generate in a single pass (and thus require a costly iterative distillation process for consistency). The second is quality itself, as generators used in prior work tend to produce outputs that are inconsistent with existing scene content and fail entirely in completely unobserved regions. To solve these, we propose a two-stage pipeline that leverages two key insights. First, we train a powerful bidirectional generative model with a novel opacity mixing strategy that encourage...
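The abstract names an "opacity mixing strategy" but is truncated before defining it. A minimal illustrative sketch of one plausible reading, assuming it means per-pixel compositing that trusts the reconstruction's render where its accumulated opacity is high and falls back to generated content in under-observed regions (the function name and formulation here are hypothetical, not the paper's actual method):

```python
# Hypothetical sketch only: the paper's opacity mixing strategy is not
# specified in the truncated abstract. This shows generic alpha-weighted
# blending of a reconstruction render with generator output.

def opacity_mix(rendered, generated, alpha):
    """Blend per pixel: weight the rendered value by accumulated
    opacity alpha, and the generated value by (1 - alpha)."""
    assert len(rendered) == len(generated) == len(alpha)
    return [a * r + (1.0 - a) * g
            for r, g, a in zip(rendered, generated, alpha)]

# A fully observed pixel (alpha = 1.0) keeps the rendered value;
# a fully unobserved pixel (alpha = 0.0) takes the generated value.
mixed = opacity_mix([0.8, 0.2], [0.5, 0.9], [1.0, 0.0])
print(mixed)  # [0.8, 0.9]
```

This captures the abstract's motivation (generators failing in unobserved regions) as a soft hand-off governed by the scene's own opacity, though the paper's strategy may operate on features or training targets rather than pixels.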