[2603.14579] Medical Image Spatial Grounding with Semantic Sampling
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.14579 (cs)

[Submitted on 15 Mar 2026 (v1), last revised 20 Mar 2026 (this version, v2)]

Title: Medical Image Spatial Grounding with Semantic Sampling

Authors: Andrew Seohwan Yu, Mohsen Hariri, Kunio Nakamura, Mingrui Yang, Xiaojuan Li, Vipin Chaudhary

Abstract: Vision language models (VLMs) have shown significant promise in visual grounding for images as well as videos. In medical imaging research, VLMs represent a bridge between object detection and segmentation on one side, and report understanding and generation on the other. However, spatial grounding of anatomical structures in the three-dimensional space of medical images poses many unique challenges. In this study, we examine image modalities, slice directions, and coordinate systems as differentiating factors for the vision components of VLMs, and the use of anatomical, directional, and relational terminology as factors for the language components. We then demonstrate that visual and textual prompting systems such as labels, bounding boxes, and mask overlays have varying effects on the spatial grounding ability of VLMs. To enable measurement and reproducibility, we introduce MIS-Ground, a benchmark that comprehensively tests a VLM for vulnerabilities against specific modes of Medical Image Spatial Grounding. We release MIS-...
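As an illustration of the visual prompting systems the abstract mentions (bounding boxes and mask overlays), the sketch below burns such prompts into a 2D slice before it would be passed to a VLM. This is a minimal, hypothetical example; the function name, parameters, and colors are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def add_visual_prompts(slice_2d, bbox=None, mask=None, alpha=0.4):
    """Render visual prompts onto a grayscale slice (illustrative sketch).

    Returns an RGB float array in [0, 1] with:
      - an optional translucent green overlay where `mask` is True, and
      - an optional red bounding-box outline, bbox = (row0, col0, row1, col1).
    """
    # Normalize intensities to [0, 1] and replicate to three channels.
    lo, hi = slice_2d.min(), slice_2d.max()
    norm = (slice_2d - lo) / (hi - lo + 1e-8)
    rgb = np.stack([norm] * 3, axis=-1)

    if mask is not None:
        # Alpha-blend green over the masked pixels (mask-overlay prompt).
        rgb[mask] = (1 - alpha) * rgb[mask] + alpha * np.array([0.0, 1.0, 0.0])

    if bbox is not None:
        # Draw a 1-pixel red rectangle outline (bounding-box prompt).
        r0, c0, r1, c1 = bbox
        rgb[r0, c0:c1 + 1] = [1.0, 0.0, 0.0]
        rgb[r1, c0:c1 + 1] = [1.0, 0.0, 0.0]
        rgb[r0:r1 + 1, c0] = [1.0, 0.0, 0.0]
        rgb[r0:r1 + 1, c1] = [1.0, 0.0, 0.0]
    return rgb

# Toy example: a 64x64 slice with a bright "structure", prompted with
# both a mask overlay and a loose bounding box around it.
slice_2d = np.zeros((64, 64))
slice_2d[20:40, 25:45] = 1.0
mask = slice_2d > 0.5
prompted = add_visual_prompts(slice_2d, bbox=(18, 23, 42, 47), mask=mask)
```

In a real pipeline the resulting RGB array would be encoded as an image and attached to the textual query (e.g. an anatomical or directional question), so that the benchmark can compare grounding accuracy across prompt types.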