[2604.08991] PinpointQA: A Dataset and Benchmark for Small Object-Centric Spatial Understanding in Indoor Videos
Computer Science > Computer Vision and Pattern Recognition
arXiv:2604.08991 (cs)
[Submitted on 10 Apr 2026]

Title: PinpointQA: A Dataset and Benchmark for Small Object-Centric Spatial Understanding in Indoor Videos
Authors: Zhiyu Zhou, Peilin Liu, Ruoxuan Zhang, Luyang Zhang, Cheng Zhang, Hongxia Xie, Wen-Huang Cheng

Abstract: Small object-centric spatial understanding in indoor videos remains a significant challenge for multimodal large language models (MLLMs), despite its practical value for object search and assistive applications. Although existing benchmarks have advanced video spatial intelligence, embodied reasoning, and diagnostic perception, none directly evaluates whether a model can localize a target object in video and express its position with sufficient precision for downstream use. In this work, we introduce PinpointQA, the first dataset and benchmark for small object-centric spatial understanding in indoor videos. Built from ScanNet++ and ScanNet200, PinpointQA comprises 1,024 scenes and 10,094 QA pairs organized into four progressively challenging tasks: Target Presence Verification (TPV), Nearest Reference Identification (NRI), Fine-Grained Spatial Description (FSD), and Structured Spatial Prediction (SSP). The dataset is built from intermediate spat...
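To make the four-task organization concrete, here is a minimal sketch of how the QA pairs could be represented and grouped by task. The task abbreviations (TPV, NRI, FSD, SSP) come from the abstract; the record fields (`scene_id`, `question`, `answer`) and all sample data are hypothetical assumptions, not the released PinpointQA schema.

```python
from dataclasses import dataclass

# The four task types named in the abstract: Target Presence Verification,
# Nearest Reference Identification, Fine-Grained Spatial Description, and
# Structured Spatial Prediction.
TASKS = ("TPV", "NRI", "FSD", "SSP")

@dataclass
class QAPair:
    scene_id: str   # hypothetical: e.g. a ScanNet++ / ScanNet200 scene id
    task: str       # one of TASKS
    question: str
    answer: str

    def __post_init__(self):
        if self.task not in TASKS:
            raise ValueError(f"unknown task: {self.task}")

def per_task_counts(pairs):
    """Count how many QA pairs fall under each of the four tasks."""
    counts = {t: 0 for t in TASKS}
    for p in pairs:
        counts[p.task] += 1
    return counts

# Tiny illustrative usage with made-up records.
sample = [
    QAPair("scene0001", "TPV", "Is there a remote control in the room?", "yes"),
    QAPair("scene0001", "NRI", "Which object is nearest to the mug?", "laptop"),
]
print(per_task_counts(sample))  # {'TPV': 1, 'NRI': 1, 'FSD': 0, 'SSP': 0}
```

In the actual benchmark, a loader along these lines would partition the 10,094 QA pairs across the four tasks for per-task evaluation.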