Computer Science > Robotics

arXiv:2511.07732 (cs)

[Submitted on 11 Nov 2025 (v1), last revised 30 Mar 2026 (this version, v2)]

Title: ViPRA: Video Prediction for Robot Actions

Authors: Sandeep Routray, Hengkai Pan, Unnat Jain, Shikhar Bahl, Deepak Pathak

Abstract: Can we turn a video prediction model into a robot policy? Videos, including those of humans or teleoperated robots, capture rich physical interactions. However, most of them lack labeled actions, which limits their use in robot learning. We present Video Prediction for Robot Actions (ViPRA), a simple pretraining-finetuning framework that learns continuous robot control from these actionless videos. Instead of directly predicting actions, we train a video-language model to predict both future visual observations and motion-centric latent actions, which serve as intermediate representations of scene dynamics. We train these latent actions using perceptual losses and optical flow consistency to ensure they reflect physically grounded behavior. For downstream control, we introduce a chunked flow matching decoder that maps latent actions to robot-specific continuous action sequences, using only 100 to 200 teleoperated demonstrations. This approach avoids expensive action annotation, supports generalization across embodiments, and enables smooth, high-frequency continuous control up to 22 Hz via chunked...
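The abstract describes learning motion-centric latent actions from actionless video by training them, alongside future-frame prediction, with perceptual losses and optical flow consistency. The sketch below illustrates the general pattern of such a latent action model, assuming an inverse-dynamics-style encoder that compresses the change between consecutive frames into a latent z and a decoder that must reconstruct the next frame from (frame_t, z). All class and function names are hypothetical, and `perceptual_loss` / `flow_consistency_loss` are placeholder callables (e.g., an LPIPS-style term and an optical-flow-based term); the abstract does not specify ViPRA's actual networks or loss weights.

```python
# Hypothetical sketch of motion-centric latent action learning from
# actionless video. Because z is the only pathway carrying dynamics,
# reconstruction pressure pushes it to encode the motion between frames.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentActionModel(nn.Module):
    """Encode the change between consecutive frames into a latent action z,
    then reconstruct frame_{t+1} from (frame_t, z)."""
    def __init__(self, channels=3, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2 * channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(channels + latent_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, frame_t, frame_t1):
        z = self.encoder(torch.cat([frame_t, frame_t1], dim=1))
        # Broadcast z over spatial dims so the decoder can condition on it.
        z_map = z[:, :, None, None].expand(-1, -1, *frame_t.shape[-2:])
        pred_t1 = self.decoder(torch.cat([frame_t, z_map], dim=1))
        return z, pred_t1

def latent_action_loss(model, frame_t, frame_t1,
                       perceptual_loss, flow_consistency_loss):
    """Placeholder loss terms stand in for the perceptual and optical flow
    consistency objectives named in the abstract."""
    z, pred_t1 = model(frame_t, frame_t1)
    recon = F.mse_loss(pred_t1, frame_t1)                      # pixel term
    perc = perceptual_loss(pred_t1, frame_t1)                  # e.g. LPIPS-style
    flow = flow_consistency_loss(frame_t, pred_t1, frame_t1)   # motion grounding
    return recon + perc + flow
```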
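For the downstream decoder, the abstract names chunked flow matching: mapping a latent action to a chunk of continuous robot actions. Below is a minimal sketch of what such a decoder could look like, assuming a standard straight-line (rectified) conditional flow matching objective over action chunks and a simple MLP velocity network. The horizon, dimensions, and Euler sampler are illustrative assumptions, not ViPRA's actual implementation.

```python
# Hedged sketch of a chunked flow matching action decoder: train a velocity
# field over noisy action chunks conditioned on a latent action z, then
# integrate it from noise to produce a chunk of H continuous actions.
import torch
import torch.nn as nn

class ChunkedFlowMatchingDecoder(nn.Module):
    def __init__(self, latent_dim=64, action_dim=7, horizon=8, hidden=256):
        super().__init__()
        self.horizon, self.action_dim = horizon, action_dim
        self.net = nn.Sequential(
            nn.Linear(latent_dim + horizon * action_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, horizon * action_dim),
        )

    def velocity(self, z, x_t, t):
        # z: (B, latent_dim), x_t: (B, H, A), t: (B, 1) with t in [0, 1]
        inp = torch.cat([z, x_t.flatten(1), t], dim=-1)
        return self.net(inp).view(-1, self.horizon, self.action_dim)

def flow_matching_loss(decoder, z, actions):
    """Conditional flow matching: regress the velocity of the straight
    path from a noise sample x0 to the ground-truth action chunk."""
    B = actions.shape[0]
    x0 = torch.randn_like(actions)
    t = torch.rand(B, 1, device=actions.device)
    x_t = (1 - t.view(B, 1, 1)) * x0 + t.view(B, 1, 1) * actions
    target_v = actions - x0            # velocity of the linear path
    pred_v = decoder.velocity(z, x_t, t)
    return ((pred_v - target_v) ** 2).mean()

@torch.no_grad()
def sample_chunk(decoder, z, steps=10):
    """Euler-integrate the learned ODE from noise to an action chunk."""
    x = torch.randn(z.shape[0], decoder.horizon, decoder.action_dim,
                    device=z.device)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((z.shape[0], 1), i * dt, device=z.device)
        x = x + dt * decoder.velocity(z, x, t)
    return x
```

Decoding a whole chunk per inference call, rather than one action at a time, is what makes the high-frequency control rate quoted in the abstract plausible: a few network evaluations yield H actions that can be executed back to back.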