[2605.07649] Operating Within the Operational Design Domain: Zero-Shot Perception with Vision-Language Models
Computer Science > Computer Vision and Pattern Recognition
arXiv:2605.07649 (cs) [Submitted on 8 May 2026]

Title: Operating Within the Operational Design Domain: Zero-Shot Perception with Vision-Language Models
Authors: Berkehan Ünal, Dierend Hauke, Fazlija Dren, Plachetka Christopher

Abstract: Over the last few years, research on autonomous systems has matured to such a degree that the field is increasingly well-positioned to translate research into practical, stakeholder-driven use cases across well-defined domains. However, wide-scale practical adoption of autonomous systems requires adherence to safety regulations. Many regulations are shaped by the Operational Design Domain (ODD), which defines the specific conditions under which an autonomous agent may operate. This is especially relevant for Automated Driving Systems (ADS), where dependable perception of ODD elements is essential for safe deployment and auditing. Vision-language models (VLMs) integrate visual recognition with language reasoning and function without task-specific training data, which makes them suitable for adaptable ODD perception. To assess whether VLMs can function as zero-shot "ODD sensors" that adapt to evolving definitions, we contribute (i) an empirical study of zero-shot ODD classification...
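To make the "ODD sensor" idea concrete, the following is an illustrative sketch (not taken from the paper): an ODD taxonomy is turned into a zero-shot classification prompt for a VLM, and the model's free-text reply is parsed back into validated ODD labels. The taxonomy, helper names, and reply format are all assumptions; the actual model call is stubbed with a mock reply, since any chat-style VLM API could be substituted.

```python
# Hypothetical ODD taxonomy: each dimension maps to its admissible values.
ODD_TAXONOMY = {
    "weather": ["clear", "rain", "snow", "fog"],
    "lighting": ["day", "night", "twilight"],
    "road_type": ["highway", "urban", "rural"],
}

def build_prompt(taxonomy):
    """Turn the taxonomy into a zero-shot classification prompt.

    Because the taxonomy is plain data, an updated ODD definition changes
    the prompt without any retraining -- the adaptability the abstract
    attributes to VLMs.
    """
    lines = ["For the given driving image, answer one value per line as 'dimension: value'."]
    for dim, values in taxonomy.items():
        lines.append(f"{dim}: choose one of {', '.join(values)}")
    return "\n".join(lines)

def parse_response(text, taxonomy):
    """Parse the VLM's free-text reply into validated ODD labels.

    Lines that do not match a known dimension or admissible value are
    dropped, so malformed model output cannot yield out-of-ODD labels.
    """
    labels = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        dim, _, value = line.partition(":")
        dim, value = dim.strip().lower(), value.strip().lower()
        if dim in taxonomy and value in taxonomy[dim]:
            labels[dim] = value
    return labels

# Example with a mocked VLM reply (a real system would send build_prompt()
# together with the camera image to the model).
mock_reply = "weather: rain\nlighting: night\nroad_type: urban"
print(parse_response(mock_reply, ODD_TAXONOMY))
# → {'weather': 'rain', 'lighting': 'night', 'road_type': 'urban'}
```

A deployed system would additionally compare the parsed labels against the permitted ODD and trigger a fallback (e.g., a minimal-risk maneuver) when the current conditions fall outside it.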