[2603.02765] Next Embedding Prediction Makes World Models Stronger
Computer Science > Machine Learning

arXiv:2603.02765 (cs) [Submitted on 3 Mar 2026]

Title: Next Embedding Prediction Makes World Models Stronger
Authors: George Bredis, Nikita Balagansky, Daniil Gavrilov, Ruslan Rakhimov

Abstract: Capturing temporal dependencies is critical for model-based reinforcement learning (MBRL) in partially observable, high-dimensional domains. We introduce NE-Dreamer, a decoder-free MBRL agent that leverages a temporal transformer to predict next-step encoder embeddings from latent state sequences, directly optimizing temporal predictive alignment in representation space. This approach enables NE-Dreamer to learn coherent, predictive state representations without reconstruction losses or auxiliary supervision. On the DeepMind Control Suite, NE-Dreamer matches or exceeds the performance of DreamerV3 and leading decoder-free agents. On a challenging subset of DMLab tasks involving memory and spatial reasoning, NE-Dreamer achieves substantial gains. These results establish next-embedding prediction with temporal transformers as an effective, scalable framework for MBRL in complex, partially observable environments.

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.02765 [cs.LG] (or arXiv:2603.02765v1 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.0...
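The core training signal described in the abstract — predicting next-step encoder embeddings rather than reconstructing observations — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the shapes, the single linear map standing in for the temporal transformer, and the plain MSE objective are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: B sequences, T time steps, D-dim encoder embeddings.
B, T, D = 4, 10, 16

# Stand-in for encoder outputs e_1..e_T (one embedding per observation).
embeddings = rng.standard_normal((B, T, D))

# Stand-in for the temporal transformer: a single causal linear map predicting
# e_{t+1} from e_t. NE-Dreamer conditions on the whole latent state sequence;
# this toy predictor only illustrates the shape of the objective.
W = rng.standard_normal((D, D)) * 0.1

def next_embedding_loss(e, W):
    """MSE between predicted and actual next-step embeddings.

    The prediction at step t targets the encoder embedding at step t+1,
    so the loss is averaged over steps 1..T-1 (no target for the last step).
    """
    pred = e[:, :-1] @ W   # predicted e_{t+1}, shape (B, T-1, D)
    target = e[:, 1:]      # actual next-step embeddings, shape (B, T-1, D)
    return float(np.mean((pred - target) ** 2))

loss = next_embedding_loss(embeddings, W)
```

Because the target is the encoder's own next-step embedding, no pixel decoder or reconstruction loss is needed; gradients flow through the predictor (and, in the full agent, the encoder) to align representations with the environment's temporal dynamics.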