[2603.26260] GeoGuide: Hierarchical Geometric Guidance for Open-Vocabulary 3D Semantic Segmentation
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.26260 (cs)
[Submitted on 27 Mar 2026]

Title: GeoGuide: Hierarchical Geometric Guidance for Open-Vocabulary 3D Semantic Segmentation
Authors: Xujing Tao, Chuxin Wang, Yubo Ai, Zhixin Cheng, Zhuoyuan Li, Liangsheng Liu, Yujia Chen, Xinjun Li, Qiao Li, Wenfei Yang, Tianzhu Zhang

Abstract: Open-vocabulary 3D semantic segmentation aims to segment arbitrary categories beyond the training set. Existing methods predominantly rely on distilling knowledge from 2D open-vocabulary models. However, aligning 3D features to the 2D representation space restricts intrinsic 3D geometric learning and inherits errors from 2D predictions. To address these limitations, we propose GeoGuide, a novel framework that leverages pretrained 3D models to integrate hierarchical geometry-semantic consistency for open-vocabulary 3D segmentation. Specifically, we introduce an Uncertainty-based Superpoint Distillation module to fuse geometric and semantic features for estimating per-point uncertainty, adaptively weighting 2D features within superpoints to suppress noise while preserving discriminative information to enhance local semantic consistency. Furthermore, our Instance-level Mask Reconstruction module leverages geometric priors to enforce semantic consist...
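The uncertainty-weighted aggregation that the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the function name `superpoint_pool`, the exponential weighting `exp(-uncertainty)`, and the input shapes are all assumptions; the paper only states that 2D features are adaptively weighted within each superpoint according to per-point uncertainty.

```python
import numpy as np

def superpoint_pool(feats_2d, uncertainty, superpoint_ids):
    """Aggregate per-point 2D features into one feature per superpoint,
    down-weighting high-uncertainty points (hypothetical weighting form).

    feats_2d:       (N, D) distilled 2D features, one row per 3D point
    uncertainty:    (N,)   estimated per-point uncertainty (higher = noisier)
    superpoint_ids: (N,)   integer superpoint assignment per point
    """
    # Assumed weighting: lower uncertainty -> larger weight.
    weights = np.exp(-uncertainty)
    out = np.zeros_like(feats_2d)
    for sp in np.unique(superpoint_ids):
        mask = superpoint_ids == sp
        w = weights[mask]
        w = w / w.sum()  # normalize within the superpoint
        pooled = (w[:, None] * feats_2d[mask]).sum(axis=0)
        # Broadcast the pooled feature back to every point in the superpoint,
        # enforcing local semantic consistency.
        out[mask] = pooled
    return out
```

With uniform uncertainty this reduces to mean pooling per superpoint; as a point's uncertainty grows, its contribution to the shared superpoint feature shrinks, which is one way to suppress noisy 2D predictions while keeping confident, discriminative points dominant.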