[2603.11601] See, Symbolize, Act: Grounding VLMs with Spatial Representations for Better Gameplay
Computer Science > Artificial Intelligence
arXiv:2603.11601 (cs)
[Submitted on 12 Mar 2026 (v1), last revised 27 Mar 2026 (this version, v2)]

Title: See, Symbolize, Act: Grounding VLMs with Spatial Representations for Better Gameplay
Authors: Ashish Baghel, Paras Chopra

Abstract: Vision-Language Models (VLMs) excel at describing visual scenes, yet struggle to translate perception into precise, grounded actions. We investigate whether providing VLMs with both the visual frame and a symbolic representation of the scene can improve their performance in interactive environments. We evaluate three state-of-the-art VLMs across Atari games, VizDoom, and AI2-THOR, comparing four pipelines: frame-only, frame with self-extracted symbols, frame with ground-truth symbols, and symbol-only. Our results indicate that all models benefit when the symbolic information is accurate. However, when VLMs extract the symbols themselves, performance becomes dependent on model capability and scene complexity. We further investigate how accurately VLMs can extract symbolic information from visual inputs and how noise in these symbols affects decision-making and gameplay performance. Our findings reveal that symbolic grounding benefits VLMs only when symbol extraction is reliable, and highlight perception quality as a central ...
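As a minimal sketch of how the four evaluated pipelines might be assembled, assuming a generic observation-building step ahead of a chat-style VLM call; every name below (`Observation`, `build_observation`, `self_extract`, the mode strings) is an illustrative assumption, not code from the paper:

```python
# Hypothetical sketch of the four input conditions the paper compares:
# frame-only, frame + self-extracted symbols, frame + ground-truth symbols,
# and symbol-only. The action-selection step would then prompt the VLM with
# whichever parts of the Observation are populated.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Observation:
    frame: Optional[bytes]    # raw game frame (e.g., PNG-encoded), if used
    symbols: Optional[str]    # textual symbolic scene description, if used


def build_observation(
    frame: bytes,
    gt_symbols: str,
    mode: str,
    self_extract: Callable[[bytes], str],
) -> Observation:
    """Assemble the VLM input for one of the four evaluated pipelines."""
    if mode == "frame_only":
        return Observation(frame=frame, symbols=None)
    if mode == "frame_plus_self_symbols":
        # The VLM first describes the scene symbolically, then acts on the
        # frame plus its own (possibly noisy) symbols.
        return Observation(frame=frame, symbols=self_extract(frame))
    if mode == "frame_plus_gt_symbols":
        return Observation(frame=frame, symbols=gt_symbols)
    if mode == "symbols_only":
        return Observation(frame=None, symbols=gt_symbols)
    raise ValueError(f"unknown mode: {mode}")
```

Under this framing, the paper's noise study would amount to perturbing the `symbols` field before the action prompt and measuring the effect on gameplay performance.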