[2602.16105] GPSBench: Do Large Language Models Understand GPS Coordinates?
Summary
The paper introduces GPSBench, a benchmark designed to evaluate how well large language models (LLMs) reason about GPS coordinates, covering both geometric coordinate operations and geographic reasoning grounded in world knowledge.
Why It Matters
As LLMs are increasingly integrated into applications involving real-world navigation and mapping, understanding their ability to process GPS coordinates is crucial. This research highlights the strengths and weaknesses of LLMs in geospatial reasoning, providing insights that can inform future model development and applications in AI-driven navigation systems.
Key Takeaways
- GPSBench consists of 57,800 samples across 17 tasks for evaluating LLMs' geospatial reasoning.
- LLMs show better performance in geographic reasoning than in geometric computations.
- Country-level geographic knowledge is strong, while city-level localization is weak.
- Robustness to coordinate noise suggests genuine coordinate understanding rather than simple memorization.
- Finetuning can improve performance in geospatial tasks but may degrade world knowledge.
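To make the "geometric coordinate operations" concrete, here is a minimal sketch of the two operations the paper names as examples, distance and bearing computation. This is illustrative code, not from the paper: the function names and the spherical-Earth radius are assumptions, and GPSBench's exact task formulations may differ.

```python
import math

# Illustrative only: standard haversine distance and initial-bearing
# formulas, the kind of geometric operations GPSBench asks models to
# perform on raw GPS coordinates.

EARTH_RADIUS_KM = 6371.0  # mean Earth radius; a simplifying assumption

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing in degrees, clockwise from true north."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlmb = math.radians(lon2 - lon1)
    y = math.sin(dlmb) * math.cos(p2)
    x = (math.cos(p1) * math.sin(p2)
         - math.sin(p1) * math.cos(p2) * math.cos(dlmb))
    return (math.degrees(math.atan2(y, x)) + 360) % 360
```

For example, Paris (48.8566, 2.3522) to London (51.5074, -0.1278) comes out to roughly 340 km on a bearing toward the northwest. Tasks like these are exactly where the paper reports LLMs being less reliable than on knowledge-based geographic reasoning.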
Abstract
arXiv:2602.16105 [cs.AI], submitted on 18 Feb 2026. Authors: Thinh Hung Truong, Jey Han Lau, Jianzhong Qi.
Large Language Models (LLMs) are increasingly deployed in applications that interact with the physical world, such as navigation, robotics, or mapping, making robust geospatial reasoning a critical capability. Despite that, LLMs' ability to reason about GPS coordinates and real-world geography remains underexplored. We introduce GPSBench, a dataset of 57,800 samples across 17 tasks for evaluating geospatial reasoning in LLMs, spanning geometric coordinate operations (e.g., distance and bearing computation) and reasoning that integrates coordinates with world knowledge. Focusing on intrinsic model capabilities rather than tool use, we evaluate 14 state-of-the-art LLMs and find that GPS reasoning remains challenging, with substantial variation across tasks: models are generally more reliable at real-world geographic reasoning than at geometric computations. Geographic knowledge degrades hierarchically, with strong country-level performance but weak city-level localization, while robustness to coordinate noise suggests genuine coordinate understanding rather than memorization. We further show th...