[2602.09463] SpotAgent: Grounding Visual Geo-localization in Large Vision-Language Models through Agentic Reasoning
Computer Science > Artificial Intelligence
arXiv:2602.09463 (cs)
[Submitted on 10 Feb 2026 (v1), last revised 2 Mar 2026 (this version, v3)]

Title: SpotAgent: Grounding Visual Geo-localization in Large Vision-Language Models through Agentic Reasoning
Authors: Furong Jia, Ling Dai, Wenjin Deng, Fan Zhang, Chen Hu, Daxin Jiang, Yu Liu

Abstract: Large Vision-Language Models (LVLMs) have demonstrated strong reasoning capabilities in geo-localization, yet they often struggle in real-world scenarios where visual cues are sparse, long-tailed, and highly ambiguous. Previous approaches, bound by their internal knowledge, often fail to provide verifiable results, yielding confident but ungrounded predictions when faced with confounding evidence. To address these challenges, we propose SpotAgent, a framework that formalizes geo-localization as an agentic reasoning process, leveraging expert-level reasoning to synergize visual interpretation with tool-assisted verification. SpotAgent actively explores and verifies visual cues by invoking external tools (e.g., web search, maps) through a ReAct-style loop. We introduce a 3-stage post-training pipeline starting with a Supervised Fine-Tuning (SFT) stage for basic alignment, followed by an Agentic Cold Start phase utilizing high-quality trajectories s...
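The abstract describes a ReAct-style loop in which the model alternates between reasoning about visual cues and calling external tools (web search, maps) to verify them. A minimal sketch of such a loop is given below; the tool names, the stub implementations, the scripted stand-in model, and the stopping rule are all illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a ReAct-style tool loop for geo-localization.
# Everything here (tool names, stubs, scripted model) is an assumption
# for illustration, not SpotAgent's real code.
from dataclasses import dataclass


@dataclass
class Step:
    thought: str       # the model's reasoning at this step
    action: str        # e.g. "web_search", "map_lookup", or "answer"
    observation: str = ""


def web_search(query: str) -> str:
    # Stub: a real agent would call a search API here.
    return f"search results for {query!r}"


def map_lookup(query: str) -> str:
    # Stub: a real agent would query a maps/geocoding API here.
    return f"map data for {query!r}"


TOOLS = {"web_search": web_search, "map_lookup": map_lookup}


def react_loop(model, image_cues: str, max_steps: int = 5):
    """Alternate model reasoning with tool calls until the model answers."""
    trajectory: list[Step] = []
    for _ in range(max_steps):
        thought, action, arg = model(image_cues, trajectory)
        step = Step(thought, action)
        if action == "answer":
            step.observation = arg
            trajectory.append(step)
            return arg, trajectory
        # Execute the chosen tool and feed the result back as an observation.
        step.observation = TOOLS[action](arg)
        trajectory.append(step)
    return "unresolved", trajectory


# Toy scripted "model": verify one cue with search, then commit to an answer.
def scripted_model(cues, trajectory):
    if not trajectory:
        return ("Signage looks Portuguese; verify the facade.",
                "web_search", "azulejo tile facade tram")
    return "Search supports Lisbon; answer.", "answer", "Lisbon, Portugal"


answer, traj = react_loop(scripted_model, "tram, tiled facade, Portuguese signage")
# → answer == "Lisbon, Portugal", with a 2-step trajectory
```

In a real system the scripted model would be replaced by the LVLM's policy, and each `Step` trajectory could be logged for the kind of trajectory-based post-training the abstract mentions.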