[2603.25937] Can Vision Foundation Models Navigate? Zero-Shot Real-World Evaluation and Lessons Learned
Computer Science > Robotics

arXiv:2603.25937 (cs) [Submitted on 26 Mar 2026]

Title: Can Vision Foundation Models Navigate? Zero-Shot Real-World Evaluation and Lessons Learned
Authors: Maeva Guerrier, Karthik Soma, Jana Pavlasek, Giovanni Beltrame

Abstract: Visual Navigation Models (VNMs) promise generalizable robot navigation by learning from large-scale visual demonstrations. Despite growing real-world deployment, existing evaluations rely almost exclusively on success rate (whether the robot reaches its goal), which conceals trajectory quality, collision behavior, and robustness to environmental change. We present a real-world evaluation of five state-of-the-art VNMs (GNM, ViNT, NoMaD, NaviBridger, and CrossFormer) across two robot platforms and five environments spanning indoor and outdoor settings. Beyond success rate, we combine path-based metrics with vision-based goal-recognition scores and assess robustness through controlled image perturbations (motion blur, sun flare). Our analysis uncovers three systematic limitations: (a) even architecturally sophisticated diffusion- and transformer-based models exhibit frequent collisions, indicating limited geometric understanding; (b) models fail to discriminate between different locations that are perceptually similar, however some semantics ...
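The abstract does not specify how the controlled image perturbations were implemented. A minimal sketch of what motion-blur and sun-flare corruptions of a camera frame could look like, using only NumPy (function names and parameter values here are illustrative assumptions, not the authors' code):

```python
import numpy as np

def motion_blur(img: np.ndarray, ksize: int = 9) -> np.ndarray:
    """Approximate horizontal motion blur: convolve each row with a
    1-D averaging kernel (zero-padded at the image borders).
    `ksize` is an assumed kernel width, not from the paper."""
    kernel = np.ones(ksize) / ksize
    out = np.empty(img.shape, dtype=np.float64)
    for c in range(img.shape[2]):          # per colour channel
        for r in range(img.shape[0]):      # per image row
            out[r, :, c] = np.convolve(img[r, :, c], kernel, mode="same")
    return np.rint(out).clip(0, 255).astype(np.uint8)

def sun_flare(img: np.ndarray, center=None, radius=None,
              strength: float = 180.0) -> np.ndarray:
    """Additive radial glare: brightness falls off as a Gaussian of
    the distance from a simulated sun position in the frame."""
    h, w = img.shape[:2]
    cy, cx = center if center is not None else (h // 4, w // 4)
    radius = radius if radius is not None else min(h, w) / 2
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - cy, xx - cx)
    glare = strength * np.exp(-((dist / radius) ** 2))
    out = img.astype(np.float64) + glare[..., None]
    return np.rint(out).clip(0, 255).astype(np.uint8)
```

Perturbations of this kind can be applied to the robot's camera stream before it reaches the VNM, so that robustness is measured under a controlled, repeatable corruption rather than by waiting for adverse conditions in the field.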