[2603.03983] GeoSeg: Training-Free Reasoning-Driven Segmentation in Remote Sensing Imagery
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.03983 (cs) [Submitted on 4 Mar 2026]

Title: GeoSeg: Training-Free Reasoning-Driven Segmentation in Remote Sensing Imagery

Authors: Lifan Jiang, Yuhang Pei, oxi Wu, Yan Zhao, Tianrun Wu, Shulong Yu, Lihui Zhang, Deng Cai

Abstract: Recent advances in MLLMs are reframing segmentation from fixed-category prediction to instruction-grounded localization. While reasoning-based segmentation has progressed rapidly in natural scenes, remote sensing lacks a generalizable solution due to the prohibitive cost of reasoning-oriented data and domain-specific challenges such as overhead viewpoints. We present GeoSeg, a zero-shot, training-free framework that bypasses the supervision bottleneck for reasoning-driven remote sensing segmentation. GeoSeg couples MLLM reasoning with precise localization via (i) bias-aware coordinate refinement, which corrects systematic grounding shifts, and (ii) a dual-route prompting mechanism, which fuses semantic intent with fine-grained spatial cues. We also introduce GeoSeg-Bench, a diagnostic benchmark of 810 image–query pairs with hierarchical difficulty levels. Experiments show that GeoSeg consistently outperforms all baselines, and extensive ablations confirm the effectiveness and necessity of each component.

Subjects: Computer Vision and Pattern Recognition (cs.CV)
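The abstract's first component, bias-aware coordinate refinement, can be illustrated with a minimal sketch: if an MLLM's grounding predictions exhibit a systematic spatial shift, that shift can be estimated on a small calibration set and subtracted from new predictions. The function names, the mean-offset estimator, and the sample coordinates below are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of bias-aware coordinate refinement.
# Assumption: grounding error is dominated by a constant (dx, dy) shift,
# estimated here as the mean offset over calibration pairs.

def estimate_bias(predicted, reference):
    """Estimate a systematic (dx, dy) shift from calibration pairs."""
    n = len(predicted)
    dx = sum(p[0] - r[0] for p, r in zip(predicted, reference)) / n
    dy = sum(p[1] - r[1] for p, r in zip(predicted, reference)) / n
    return dx, dy

def refine(points, bias):
    """Subtract the estimated bias from each predicted point."""
    dx, dy = bias
    return [(x - dx, y - dy) for x, y in points]

# Calibration pairs: model-predicted centers vs. ground-truth centers.
preds = [(12.0, 7.0), (30.0, 22.0), (55.0, 41.0)]
refs = [(10.0, 5.0), (28.0, 20.0), (53.0, 39.0)]
bias = estimate_bias(preds, refs)    # (2.0, 2.0)
print(refine([(40.0, 40.0)], bias))  # [(38.0, 38.0)]
```

In practice the correction would likely be conditioned on image content or query type rather than a single global offset, but the calibrate-then-subtract structure conveys the idea.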