[2603.29281] PRISM: A Multi-View Multi-Capability Retail Video Dataset for Embodied Vision-Language Models
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.29281 (cs)
[Submitted on 31 Mar 2026]
Title: PRISM: A Multi-View Multi-Capability Retail Video Dataset for Embodied Vision-Language Models
Authors: Amirreza Rouhi, Parikshit Sakurikar, Satya Sai Reddy, Narsimha Menga, Anirudh Govil, Sri Harsha Chittajallu, Rajat Aggarwal, Anoop Namboodiri, Sashi Reddi

Abstract: A critical gap exists between the general-purpose visual understanding of state-of-the-art physical AI models and the specialized perceptual demands of structured real-world deployment environments. We present PRISM, a 270K-sample multi-view video supervised fine-tuning (SFT) corpus for embodied vision-language models (VLMs) in real-world retail environments. PRISM is motivated by a simple observation: physical AI systems fail not because of poor visual recognition, but because they do not understand space, physical dynamics, and embodied action well enough to operate reliably in the world. To this end, PRISM is grounded in a novel three-dimensional knowledge ontology that spans spatial knowledge, temporal and physical knowledge, and embodied action knowledge. It covers 20+ capability probes across four evaluation dimensions: Embodied Reasoning (ER), Common Sense (CS), Spatial Perception (SP), and Intuitive Physics (IP).
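
To make the taxonomy easier to picture, here is a minimal, hypothetical Python sketch of how one PRISM SFT record might be organized around the abstract's three knowledge dimensions and four evaluation dimensions. Every field name, enum value, and example string below is an illustrative assumption, not the paper's released schema.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List


class KnowledgeDimension(Enum):
    # The three-dimensional knowledge ontology named in the abstract.
    SPATIAL = "spatial"
    TEMPORAL_PHYSICAL = "temporal_physical"
    EMBODIED_ACTION = "embodied_action"


class EvalDimension(Enum):
    # The four evaluation dimensions named in the abstract.
    ER = "embodied_reasoning"
    CS = "common_sense"
    SP = "spatial_perception"
    IP = "intuitive_physics"


@dataclass
class PRISMSample:
    """One SFT record: multi-view video clips paired with a capability probe.

    Field names and the probe vocabulary are hypothetical placeholders.
    """
    sample_id: str
    view_paths: List[str]               # one clip per camera view
    knowledge_dim: KnowledgeDimension   # ontology axis the sample targets
    eval_dim: EvalDimension             # which of the 20+ probes' 4 dimensions
    capability_probe: str               # fine-grained probe label
    question: str                       # instruction/question shown to the VLM
    answer: str                         # target response for SFT


# Example usage with placeholder values.
sample = PRISMSample(
    sample_id="prism_000001",
    view_paths=["cam_front.mp4", "cam_overhead.mp4"],
    knowledge_dim=KnowledgeDimension.SPATIAL,
    eval_dim=EvalDimension.SP,
    capability_probe="relative_object_position",
    question="Which shelf holds the cereal box nearest the front camera?",
    answer="The second shelf from the top.",
)
print(sample.eval_dim.value)
```

Keying each record to both an ontology axis and an evaluation dimension, as sketched here, would let a fine-tuning pipeline stratify or filter the 270K samples by capability; whether the released dataset exposes these labels this way is an assumption.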