[2603.22892] VLGOR: Visual-Language Knowledge Guided Offline Reinforcement Learning for Generalizable Agents
Computer Science > Machine Learning
arXiv:2603.22892 (cs)
[Submitted on 24 Mar 2026]

Title: VLGOR: Visual-Language Knowledge Guided Offline Reinforcement Learning for Generalizable Agents
Authors: Pengsen Liu, Maosen Zeng, Nan Tang, Kaiyuan Li, Jing-Cheng Pang, Yunan Liu, Yang Yu

Abstract: Combining Large Language Models (LLMs) with Reinforcement Learning (RL) enables agents to interpret language instructions more effectively for task execution. However, LLMs typically lack direct perception of the physical environment, which limits their understanding of environmental dynamics and their ability to generalize to unseen tasks. To address this limitation, we propose Visual-Language Knowledge-Guided Offline Reinforcement Learning (VLGOR), a framework that integrates visual and language knowledge to generate imaginary rollouts, thereby enriching the interaction data. The core premise of VLGOR is to fine-tune a vision-language model to predict future states and actions conditioned on an initial visual observation and high-level instructions, ensuring that the generated rollouts remain temporally coherent and spatially plausible. Furthermore, we employ counterfactual prompts to produce more diverse rollouts for offline RL training, enabling the agent to acquire knowledge that facilitates follo...
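The sketch below illustrates the rollout-generation idea described in the abstract: a fine-tuned vision-language model is rolled forward from an initial observation under a high-level instruction to produce imagined transitions, and counterfactual prompt variants diversify the resulting offline dataset. The function names (`vlm_predict_step`, `generate_imaginary_rollout`), the example prompts, and the toy observation format are assumptions for illustration, not the paper's implementation.

# Minimal, hypothetical sketch of VLGOR-style imaginary rollout generation.
# The fine-tuned vision-language model is stood in for by a stub; a real
# implementation would query the actual model.
import numpy as np

def vlm_predict_step(observation, instruction, rng):
    """Stub for the fine-tuned VLM: given the current (imagined) observation
    and a high-level instruction, predict the next action and future observation."""
    action = rng.integers(0, 4)  # placeholder discrete action
    next_observation = observation + rng.normal(scale=0.1, size=observation.shape)
    return action, next_observation

def generate_imaginary_rollout(initial_obs, instruction, horizon, seed=0):
    """Roll the VLM forward from an initial observation to build an imaginary
    trajectory of (obs, action, next_obs) transitions for offline RL training."""
    rng = np.random.default_rng(seed)
    obs, rollout = initial_obs, []
    for _ in range(horizon):
        action, next_obs = vlm_predict_step(obs, instruction, rng)
        rollout.append((obs, action, next_obs))
        obs = next_obs
    return rollout

if __name__ == "__main__":
    initial_obs = np.zeros(8)  # toy stand-in for an encoded visual observation
    # Counterfactual prompts: instruction variants used to diversify the rollouts.
    prompts = ["pick up the red block", "pick up the blue block instead"]
    dataset = [t for p in prompts
               for t in generate_imaginary_rollout(initial_obs, p, horizon=5)]
    print(f"augmented offline dataset with {len(dataset)} imagined transitions")

In this sketch the imagined transitions would simply be appended to the logged interaction data before running any standard offline RL algorithm; the abstract does not specify which learner VLGOR uses.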