[2603.22281] ThinkJEPA: Empowering Latent World Models with Large Vision-Language Reasoning Model
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.22281 (cs)
[Submitted on 23 Mar 2026]

Title: ThinkJEPA: Empowering Latent World Models with Large Vision-Language Reasoning Model
Authors: Haichao Zhang, Yijiang Li, Shwai He, Tushar Nagarajan, Mingfei Chen, Jianglin Lu, Ang Li, Yun Fu

Abstract: Recent progress in latent world models (e.g., V-JEPA2) has shown promising capability in forecasting future world states from video observations. Nevertheless, dense prediction from a short observation window limits temporal context and can bias predictors toward local, low-level extrapolation, making it difficult to capture long-horizon semantics and reducing downstream utility. Vision-language models (VLMs), in contrast, provide strong semantic grounding and general knowledge by reasoning over uniformly sampled frames, but they are not ideal as standalone dense predictors due to compute-driven sparse sampling, a language-output bottleneck that compresses fine-grained interaction states into text-oriented representations, and a data-regime mismatch when adapting to small action-conditioned datasets. We propose a VLM-guided JEPA-style latent world modeling framework that combines dense-frame dynamics modeling with long-horizon semantic guidance via a dual-temporal pathway: a dense JEPA...
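
Below is a minimal sketch of the dual-temporal idea the abstract describes: a dense JEPA-style predictor forecasts a future latent from densely sampled frames, while a sparse pathway projects VLM embeddings of uniformly sampled frames into the latent space as long-horizon semantic guidance. All module names, loss choices, and the loss weight are hypothetical illustrations, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualTemporalSketch(nn.Module):
    """Hypothetical sketch of a VLM-guided JEPA-style predictor."""

    def __init__(self, latent_dim=256, vlm_dim=768):
        super().__init__()
        # Dense pathway: predicts the next latent from a short window
        # of past frame latents (local, low-level dynamics).
        self.dense_predictor = nn.GRU(latent_dim, latent_dim, batch_first=True)
        # Sparse pathway: maps frozen VLM features of sparsely sampled
        # frames into the latent space for semantic guidance.
        self.vlm_proj = nn.Linear(vlm_dim, latent_dim)

    def forward(self, past_latents, target_latent, vlm_embedding):
        # past_latents:  (B, T, D) encoder latents of dense frames
        # target_latent: (B, D)    target latent of the future frame
        # vlm_embedding: (B, vlm_dim) VLM feature over the same horizon
        _, h = self.dense_predictor(past_latents)
        pred = h.squeeze(0)                           # (B, D) predicted future latent
        jepa_loss = F.mse_loss(pred, target_latent)   # standard JEPA-style regression
        # Long-horizon guidance: align the prediction with projected VLM
        # semantics; cosine distance is one of several plausible choices.
        guide = self.vlm_proj(vlm_embedding)
        sem_loss = 1 - F.cosine_similarity(pred, guide, dim=-1).mean()
        return jepa_loss + 0.1 * sem_loss             # 0.1 is an illustrative weight

# Toy usage with random tensors standing in for encoder/VLM outputs.
model = DualTemporalSketch()
loss = model(torch.randn(4, 8, 256), torch.randn(4, 256), torch.randn(4, 768))
loss.backward()
```

The sketch keeps the two pathways at different temporal granularities on purpose: the dense GRU sees every frame latent, while the VLM feature summarizes a sparsely sampled long horizon, which is the division of labor the abstract motivates.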