[2603.22851] UniQueR: Unified Query-based Feedforward 3D Reconstruction
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.22851 (cs) [Submitted on 24 Mar 2026]

Title: UniQueR: Unified Query-based Feedforward 3D Reconstruction
Authors: Chensheng Peng, Quentin Herau, Jiezhi Yang, Yichen Xie, Yihan Hu, Wenzhao Zheng, Matthew Strong, Masayoshi Tomizuka, Wei Zhan

Abstract: We present UniQueR, a unified query-based feedforward framework for efficient and accurate 3D reconstruction from unposed images. Existing feedforward models such as DUSt3R, VGGT, and AnySplat typically predict per-pixel point maps or pixel-aligned Gaussians, which remain fundamentally 2.5D and limited to visible surfaces. In contrast, UniQueR formulates reconstruction as a sparse 3D query inference problem. Our model learns a compact set of 3D anchor points that act as explicit geometric queries, enabling the network to infer scene structure, including geometry in occluded regions, in a single forward pass. Each query encodes spatial and appearance priors directly in global 3D space (instead of per-frame camera space) and spawns a set of 3D Gaussians for differentiable rendering. By leveraging unified query interactions across multi-view features and a decoupled cross-attention design, UniQueR achieves strong geometric expressiveness while substantially reducing memory and computational cost. Experiments on...
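The abstract describes 3D anchor queries that cross-attend to multi-view image features and then each spawn a set of 3D Gaussians. A minimal NumPy sketch of that data flow is below; all sizes, weight matrices, and the Gaussian parameterization (offset, opacity, color per spawned Gaussian) are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes (not from the paper):
# Q anchor queries, V views, P feature tokens per view, D channels.
Q, V, P, D = 64, 4, 196, 32
rng = np.random.default_rng(0)

# Learned 3D anchors: an explicit position in global space plus a feature
# carrying spatial/appearance priors (both would be learned parameters).
anchor_xyz = rng.normal(size=(Q, 3))
anchor_feat = rng.normal(size=(Q, D))

# Multi-view image features, flattened across all views so every query
# can gather evidence from every view in one attention step.
img_feat = rng.normal(size=(V * P, D))

# One cross-attention step: queries attend to multi-view features.
Wq = rng.normal(size=(D, D)) / np.sqrt(D)
Wk = rng.normal(size=(D, D)) / np.sqrt(D)
Wv = rng.normal(size=(D, D)) / np.sqrt(D)
attn = softmax((anchor_feat @ Wq) @ (img_feat @ Wk).T / np.sqrt(D))
updated = attn @ (img_feat @ Wv)  # (Q, D) updated query features

# Each updated query spawns K Gaussians via a small prediction head:
# a 3D offset from the anchor, an opacity, and an RGB color (7 values).
K = 4
Whead = rng.normal(size=(D, K * 7)) / np.sqrt(D)
params = (updated @ Whead).reshape(Q, K, 7)
gauss_xyz = anchor_xyz[:, None, :] + 0.1 * params[..., :3]
print(gauss_xyz.shape)
```

Because the Q anchors live in global 3D space rather than per-pixel camera space, the Gaussian count scales with the number of queries (Q * K here) instead of the number of pixels, which is the sparsity the abstract credits for the reduced memory and compute cost.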