[2603.19308] GT-Space: Enhancing Heterogeneous Collaborative Perception with Ground Truth Feature Space
Computer Science > Machine Learning
arXiv:2603.19308 (cs)
[Submitted on 13 Mar 2026]

Title: GT-Space: Enhancing Heterogeneous Collaborative Perception with Ground Truth Feature Space
Authors: Wentao Wang, Haoran Xu, Guang Tan

Abstract: In autonomous driving, multi-agent collaborative perception enhances sensing capabilities by enabling agents to share perceptual data. A key challenge lies in handling heterogeneous features from agents equipped with different sensing modalities or model architectures, which complicates data fusion. Existing approaches often require retraining encoders or designing interpreter modules for pairwise feature alignment, but these solutions are not scalable in practice. To address this, we propose GT-Space, a flexible and scalable collaborative perception framework for heterogeneous agents. GT-Space constructs a common feature space from ground-truth labels, providing a unified reference for feature alignment. With this shared space, each agent needs only a single adapter module to project its features, eliminating the need for pairwise interactions with other agents. Furthermore, we design a fusion network trained with contrastive losses across diverse modality combinations. Extensive experiments on simulation datasets (OPV2V and V2XSet) and a real-world d...
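The abstract's core idea, one adapter per agent projecting heterogeneous features into a shared reference space, trained with a contrastive objective, can be illustrated with a minimal sketch. This is not the paper's implementation; the adapter shapes, the GT-space anchors, and the InfoNCE-style loss below are illustrative assumptions, using plain numpy linear maps in place of learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def adapter(feats, W):
    # Per-agent adapter: a single linear projection into the shared
    # GT feature space (a stand-in for a learned adapter module).
    return feats @ W

def contrastive_loss(z, anchors, tau=0.1):
    # InfoNCE-style loss: the i-th projected feature should match the
    # i-th ground-truth anchor and repel all other anchors.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    logits = (z @ a.T) / tau
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

d_common = 16
num_objects = 8
# Hypothetical GT-space anchors derived from ground-truth labels.
anchors = rng.normal(size=(num_objects, d_common))

# Heterogeneous agents: e.g. a 32-d LiDAR encoder and a 64-d camera encoder.
feats = {"lidar": rng.normal(size=(num_objects, 32)),
         "camera": rng.normal(size=(num_objects, 64))}
adapters = {k: rng.normal(size=(v.shape[1], d_common))
            for k, v in feats.items()}

# Each agent aligns to the common space independently: no pairwise
# agent-to-agent interpreter modules are needed.
for name, f in feats.items():
    z = adapter(f, adapters[name])
    print(f"{name}: contrastive loss = {contrastive_loss(z, anchors):.3f}")
```

Because every agent aligns against the same fixed anchors, adding an N+1-th agent requires training only one new adapter rather than N pairwise interpreters, which is the scalability argument the abstract makes.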