[2512.08639] Aerial Vision-Language Navigation with a Unified Framework for Spatial, Temporal and Embodied Reasoning
Summary
This paper presents a unified framework for Aerial Vision-and-Language Navigation (VLN) that enables UAVs to interpret natural language instructions and navigate complex urban environments using only monocular RGB observations.
Why It Matters
The research addresses the limitations of existing navigation methods that depend on panoramic images, depth inputs, or odometry, making aerial VLN feasible for lightweight UAVs in real-world scenarios. This has significant implications for applications such as low-altitude inspection, search-and-rescue, and autonomous aerial delivery, enhancing the practicality and efficiency of UAV technology.
Key Takeaways
- Introduces a novel framework for aerial VLN using monocular RGB observations.
- Optimizes navigation through prompt-guided multi-task learning.
- Implements keyframe selection to reduce visual redundancy.
- Achieves strong performance in both seen and unseen environments.
- Addresses long-tailed supervision imbalance for stable training.
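The keyframe-selection idea in the takeaways can be illustrated with a minimal sketch. The paper's exact selection criterion is not described here, so the cosine-similarity rule, the `threshold` value, and the function name below are all illustrative assumptions: a frame is kept only when it differs sufficiently from the last kept keyframe, which reduces visual redundancy in the input sequence.

```python
import numpy as np

def select_keyframes(frames, threshold=0.9):
    """Return indices of frames kept as keyframes.

    `frames`: list of HxWx3 uint8 RGB arrays. A frame is kept when its
    cosine similarity to the last kept keyframe falls below `threshold`.
    This similarity rule is an illustrative assumption, not the paper's
    exact criterion.
    """
    keyframes = []
    last_vec = None
    for idx, frame in enumerate(frames):
        vec = frame.astype(np.float32).ravel()
        vec /= np.linalg.norm(vec) + 1e-8  # normalize for cosine similarity
        if last_vec is None or float(vec @ last_vec) < threshold:
            keyframes.append(idx)  # sufficiently novel frame: keep it
            last_vec = vec
    return keyframes
```

In practice such a filter would run over per-frame visual features rather than raw pixels, but the control flow is the same: redundant consecutive views are dropped before they reach the reasoning model.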
Computer Science > Computer Vision and Pattern Recognition
arXiv:2512.08639 (cs)
[Submitted on 9 Dec 2025 (v1), last revised 25 Feb 2026 (this version, v2)]
Title: Aerial Vision-Language Navigation with a Unified Framework for Spatial, Temporal and Embodied Reasoning
Authors: Huilin Xu, Zhuoyang Liu, Yixiang Luomei, Feng Xu
Abstract: Aerial Vision-and-Language Navigation (VLN) aims to enable unmanned aerial vehicles (UAVs) to interpret natural language instructions and navigate complex urban environments using onboard visual observation. This task holds promise for real-world applications such as low-altitude inspection, search-and-rescue, and autonomous aerial delivery. Existing methods often rely on panoramic images, depth inputs, or odometry to support spatial reasoning and action planning. These requirements increase system cost and integration complexity, thus hindering practical deployment for lightweight UAVs. We present a unified aerial VLN framework that operates solely on egocentric monocular RGB observations and natural language instructions. The model formulates navigation as a next-token prediction problem, jointly optimizing spatial perception, trajectory reasoning, and action prediction through prompt-guided multi-task learning. Moreover, we propose a keyframe selection...
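The abstract describes casting navigation as next-token prediction with prompt-guided multi-task learning. A minimal sketch of how such training samples could be formatted follows; the task names, prompt templates, and function below are hypothetical illustrations, not the paper's actual prompts.

```python
# Hypothetical task prompts for prompt-guided multi-task learning.
# The three tasks mirror those named in the abstract: spatial perception,
# trajectory reasoning, and action prediction.
TASK_PROMPTS = {
    "perception": "Describe the landmarks visible ahead.",
    "trajectory": "Summarize the path flown so far.",
    "action": "Predict the next action.",
}

def build_sample(task, instruction, target):
    """Format one training example as a single text sequence.

    Under next-token prediction, the model is trained to continue this
    sequence; typically the loss is applied only to the tokens after
    "Answer:". All template strings here are illustrative assumptions.
    """
    prompt = (
        f"Instruction: {instruction}\n"
        f"Task: {TASK_PROMPTS[task]}\n"
        "Answer: "
    )
    return prompt + target
```

Because all three tasks share one sequence format and one language-model head, a single decoder can be optimized jointly across them, with the task prompt selecting which capability a given sample supervises.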