[2512.13157] Intrinsic Image Fusion for Multi-View 3D Material Reconstruction
Computer Science > Computer Vision and Pattern Recognition
arXiv:2512.13157 (cs)
[Submitted on 15 Dec 2025 (v1), last revised 21 Mar 2026 (this version, v2)]

Title: Intrinsic Image Fusion for Multi-View 3D Material Reconstruction
Authors: Peter Kocsis (1), Lukas Höllein (1), Matthias Nießner (1) ((1) Technical University of Munich)

Abstract: We introduce Intrinsic Image Fusion, a method that reconstructs high-quality physically based materials from multi-view images. Material reconstruction is highly underconstrained and typically relies on analysis-by-synthesis, which requires expensive and noisy path tracing. To better constrain the optimization, we incorporate single-view priors into the reconstruction process. We leverage a diffusion-based material estimator that produces multiple, but often inconsistent, candidate decompositions per view. To reduce this inconsistency, we fit an explicit low-dimensional parametric function to the predictions. We then propose a robust optimization framework that combines soft per-view prediction selection with a confidence-based soft multi-view inlier set, fusing the most consistent predictions of the most confident views into a consistent parametric material space. Finally, we use inverse path tracing to optimize the low-dimensional parameters. Our results outperform state-...
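The fusion step described in the abstract, soft selection among a view's candidate decompositions combined with confidence-based soft inlier weighting across views, could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function name, the softmax temperatures, and the use of a fixed consensus point in place of the fitted parametric function are all our assumptions.

```python
import numpy as np

def soft_fuse(candidates, consensus, tau_pred=0.1, tau_view=0.1):
    """Fuse per-view candidate material predictions into one estimate.

    candidates: (V, K, C) array, K candidate decompositions for each of V views
    consensus:  (C,) current consensus material parameters
    tau_pred, tau_view: hypothetical softness temperatures (our assumption)
    """
    # Residual of each candidate against the current consensus.
    resid = np.linalg.norm(candidates - consensus, axis=-1)        # (V, K)

    # Soft per-view prediction selection: softmax over candidates per view,
    # so each view contributes a blend dominated by its best candidate.
    w_pred = np.exp(-resid / tau_pred)
    w_pred /= w_pred.sum(axis=1, keepdims=True)                    # (V, K)
    per_view = (w_pred[..., None] * candidates).sum(axis=1)        # (V, C)

    # Confidence-based soft multi-view inlier weighting: views whose best
    # candidate agrees with the consensus receive higher weight.
    conf = np.exp(-resid.min(axis=1) / tau_view)
    w_view = conf / conf.sum()                                     # (V,)
    return (w_view[:, None] * per_view).sum(axis=0)                # (C,)
```

In a full pipeline this fusion would alternate with re-fitting the low-dimensional parametric material function, and the fused parameters would then be refined by inverse path tracing as the abstract describes.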