[2510.01448] GeoSURGE: Geo-localization using Semantic Fusion with Hierarchy of Geographic Embeddings
Computer Science > Computer Vision and Pattern Recognition

arXiv:2510.01448 (cs) [Submitted on 1 Oct 2025 (v1), last revised 27 Mar 2026 (this version, v2)]

Title: GeoSURGE: Geo-localization using Semantic Fusion with Hierarchy of Geographic Embeddings

Authors: Angel Daruna, Nicholas Meegan, Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar

Abstract: Worldwide visual geo-localization aims to determine the geographic location of an image anywhere on Earth using only its visual content. Despite recent progress, learning expressive representations of geographic space remains challenging due to the inherently low-dimensional nature of geographic coordinates. We formulate global geo-localization as aligning the visual representation of a query image with a learned geographic representation. Our approach explicitly models the world as a hierarchy of learned geographic embeddings, enabling a distributed, multi-scale representation of geographic space. In addition, we introduce a semantic fusion module that efficiently integrates appearance features with semantic segmentation through latent cross-attention, producing a more robust visual representation for localization. Experiments on five widely used geo-localization benchmarks demonstrate that our method achieves new state-of-the-art results on 22 ...
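The abstract's semantic fusion idea, integrating appearance features with semantic-segmentation features through latent cross-attention, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the token counts, embedding width, number of latent queries, and the random matrices standing in for learned projection weights are all assumptions made here for the sake of a runnable example.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def latent_cross_attention(appearance, semantics, latents, rng):
    """Fuse appearance and semantic tokens via a small set of latent queries.

    appearance: (Na, d) appearance feature tokens (e.g. ViT patch features)
    semantics:  (Ns, d) semantic-segmentation feature tokens
    latents:    (L, d)  latent queries, with L << Na + Ns

    Returns an (L, d) fused visual representation. Because the latents act as
    queries, attention costs O(L * (Na + Ns)) rather than the O((Na + Ns)^2)
    of full self-attention over the concatenated tokens.
    """
    d = appearance.shape[1]
    tokens = np.concatenate([appearance, semantics], axis=0)  # (Na+Ns, d)

    # Random projections stand in for learned Q/K/V weights in this sketch.
    Wq = rng.standard_normal((d, d)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d)) / np.sqrt(d)

    q = latents @ Wq                        # (L, d)
    k = tokens @ Wk                         # (Na+Ns, d)
    v = tokens @ Wv                         # (Na+Ns, d)
    attn = softmax(q @ k.T / np.sqrt(d))    # (L, Na+Ns), rows sum to 1
    return attn @ v                         # (L, d)

rng = np.random.default_rng(0)
fused = latent_cross_attention(
    rng.standard_normal((196, 64)),  # hypothetical appearance tokens
    rng.standard_normal((196, 64)),  # hypothetical segmentation tokens
    rng.standard_normal((8, 64)),    # 8 latent queries (assumed count)
    rng,
)
print(fused.shape)  # (8, 64)
```

The compact fused representation (here 8 tokens instead of 392) is what would then be aligned against the learned geographic embeddings; the hierarchy of geographic embeddings itself is not modeled in this sketch.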