[2603.18532] Scaling Sim-to-Real Reinforcement Learning for Robot VLAs with Generative 3D Worlds
Computer Science > Robotics
arXiv:2603.18532 (cs)
[Submitted on 19 Mar 2026 (v1), last revised 28 Mar 2026 (this version, v2)]
Title: Scaling Sim-to-Real Reinforcement Learning for Robot VLAs with Generative 3D Worlds
Authors: Andrew Choi, Xinjie Wang, Zhizhong Su, Wei Xu
Abstract: The strong performance of large vision-language models (VLMs) trained with reinforcement learning (RL) has motivated similar approaches for fine-tuning vision-language-action (VLA) models in robotics. Many recent works fine-tune VLAs directly in the real world to sidestep the sim-to-real gap. While real-world RL circumvents sim-to-real issues, it inherently limits the generality of the resulting VLA, since scaling scene and object diversity in the physical world is prohibitively difficult. This leads to the paradoxical outcome of transforming a broadly pretrained model into an overfitted, scene-specific policy. Training in simulation instead provides access to diverse scenes, but designing those scenes is also costly. In this work, we show that VLAs can be RL fine-tuned without sacrificing generality, and with reduced manual labor, by leveraging 3D world generative models. Using these models together with a language-driven scene designer, we generate hundreds of diverse interactive scenes containing unique objects and backgrounds, enabling sc...