[2603.07080] VLN-Cache: Enabling Token Caching for VLN Models with Visual/Semantic Dynamics Awareness
Computer Science > Robotics
arXiv:2603.07080 (cs)
[Submitted on 7 Mar 2026 (v1), last revised 29 Apr 2026 (this version, v3)]

Title: VLN-Cache: Enabling Token Caching for VLN Models with Visual/Semantic Dynamics Awareness
Authors: Zihao Zheng, Zhihao Mao, Xingyue Zhou, Jiayu Chen, Maoliang Li, Xinhao Sun, Hailong Zou, Zhaobo Zhang, Xuanzhe Liu, Donggang Cao, Hong Mei, Xiang Chen

Abstract: Vision-and-Language Navigation (VLN) increasingly relies on large vision-language models, but their inference cost conflicts with real-time deployment. Token caching is a promising training-free strategy that avoids redundant computation by reusing stable visual tokens across frames. However, existing methods assume a static camera and fixed semantic focus, assumptions that VLN fundamentally violates. We identify two failure modes: (1) visual dynamics, where viewpoint shift displaces token positions across frames, causing position-wise matching to pair misaligned content; (2) semantic dynamics, where token relevance shifts across task stages as navigation progresses, making cached states stale. We propose VLN-Cache, a visual-dynamic-aware and semantic-dynamic-aware caching framework that introduces view-aligned remapping to recover geometric correspondences and a task-relevance saliency filter to veto reuse a...
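The caching decision the abstract describes can be made concrete with a small sketch. The Python snippet below is a minimal illustration, not the paper's implementation: the global patch-shift remapping, the thresholds (sim_tau, sal_tau), and all function names are assumptions. It shows one way a per-token reuse mask could combine (a) view-aligned similarity between the cached and current frame and (b) a task-relevance veto that forces salient tokens to be recomputed.

```python
import torch
import torch.nn.functional as F

def view_aligned_remap(prev_tokens, shift, grid_hw):
    """Shift the cached patch-token grid by an estimated (dy, dx) patch
    offset so cached content lines up with the current viewpoint.
    A real system would estimate per-patch displacement (e.g. from
    ego-motion); a single global shift is assumed here for brevity."""
    H, W = grid_hw
    grid = prev_tokens.view(H, W, -1)          # (H*W, D) -> (H, W, D)
    dy, dx = shift
    grid = torch.roll(grid, shifts=(dy, dx), dims=(0, 1))
    return grid.reshape(H * W, -1)

def reuse_mask(prev_tokens, curr_tokens, shift, grid_hw,
               sim_tau=0.95, saliency=None, sal_tau=0.5):
    """Per-token reuse decision: reuse the cached state only when the
    view-aligned cached token closely matches the current token AND the
    token is not salient for the current instruction stage."""
    aligned = view_aligned_remap(prev_tokens, shift, grid_hw)
    sim = F.cosine_similarity(aligned, curr_tokens, dim=-1)
    mask = sim > sim_tau                        # geometric stability
    if saliency is not None:
        mask &= saliency < sal_tau              # task-relevance veto
    return mask                                 # True = reuse cached token

# Usage on a 16x16 patch grid with 768-dim tokens (illustrative values):
H = W = 16
prev = torch.randn(H * W, 768)
curr = prev + 0.01 * torch.randn_like(prev)     # near-identical next frame
mask = reuse_mask(prev, curr, shift=(0, 1), grid_hw=(H, W))
print(f"reusing {mask.sum().item()}/{mask.numel()} tokens")
```

Under these assumptions, tokens that survive both tests skip recomputation in the vision-language backbone, while displaced or newly relevant tokens are processed fresh, which is the failure-mode split (visual vs. semantic dynamics) the abstract identifies.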