[2603.28053] Reducing Oracle Feedback with Vision-Language Embeddings for Preference-Based RL
Computer Science > Machine Learning
arXiv:2603.28053 (cs)
[Submitted on 30 Mar 2026]

Title: Reducing Oracle Feedback with Vision-Language Embeddings for Preference-Based RL
Authors: Udita Ghosh, Dripta S. Raychaudhuri, Jiachen Li, Konstantinos Karydis, Amit Roy-Chowdhury

Abstract: Preference-based reinforcement learning can learn effective reward functions from comparisons, but its scalability is constrained by the high cost of oracle feedback. Lightweight vision-language embedding (VLE) models provide a cheaper alternative, but their noisy outputs limit their effectiveness as standalone reward generators. To address this challenge, we propose ROVED, a hybrid framework that combines VLE-based supervision with targeted oracle feedback. Our method uses the VLE to generate segment-level preferences and defers to an oracle only for samples with high uncertainty, identified through a filtering mechanism. In addition, we introduce a parameter-efficient fine-tuning method that adapts the VLE with the obtained oracle feedback, improving the model over time in a synergistic fashion. This retains the scalability of embeddings and the accuracy of oracles while avoiding the inefficiencies of each. Across multiple robotic manipulation tasks, ROVED matches or surpasses prior preference-based met...
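The selective-labeling loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the cosine-similarity scoring, the `label_pair` function, and the fixed `margin` threshold are all assumptions standing in for the paper's VLE preference model and uncertainty filter.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def label_pair(seg_a_emb, seg_b_emb, task_emb, oracle, margin=0.05):
    """Label a segment pair with a preference (0 = first segment preferred,
    1 = second). The VLE score is used when the two segments are clearly
    separated; when the score margin is small (high uncertainty), the
    labeler defers to the oracle. Returns (label, oracle_queried)."""
    score_a = cosine(seg_a_emb, task_emb)
    score_b = cosine(seg_b_emb, task_emb)
    if abs(score_a - score_b) < margin:
        # Ambiguous pair: fall back to the (expensive) oracle.
        return oracle(seg_a_emb, seg_b_emb), True
    # Confident pair: keep the cheap VLE-derived label.
    return (0 if score_a > score_b else 1), False
```

In this sketch the oracle is any callable returning a preference label; in practice the queried pairs would also be collected to fine-tune the VLE, as the abstract describes.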