[2603.18202] R2-Dreamer: Redundancy-Reduced World Models without Decoders or Augmentation
Computer Science > Machine Learning

arXiv:2603.18202 (cs)

[Submitted on 18 Mar 2026 (v1), last revised 20 Mar 2026 (this version, v2)]

Title: R2-Dreamer: Redundancy-Reduced World Models without Decoders or Augmentation

Authors: Naoki Morihira, Amal Nahar, Kartik Bharadwaj, Yasuhiro Kato, Akinobu Hayashi, Tatsuya Harada

Abstract: A central challenge in image-based Model-Based Reinforcement Learning (MBRL) is to learn representations that distill essential information from irrelevant visual details. While promising, reconstruction-based methods often waste capacity on large task-irrelevant regions. Decoder-free methods instead learn robust representations by leveraging Data Augmentation (DA), but reliance on such external regularizers limits versatility. We propose R2-Dreamer, a decoder-free MBRL framework with a self-supervised objective that serves as an internal regularizer, preventing representation collapse without resorting to DA. The core of our method is a redundancy-reduction objective inspired by Barlow Twins, which can be easily integrated into existing frameworks. On DeepMind Control Suite and Meta-World, R2-Dreamer is competitive with strong baselines such as DreamerV3 and TD-MPC2 while training 1.59x faster than DreamerV3, and yields substantial gains on DMC-Subtle with tiny task-relevant ob...
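The abstract names a redundancy-reduction objective "inspired by Barlow Twins" as the core of the method. Below is a minimal NumPy sketch of the standard Barlow Twins loss (Zbontar et al., 2021) for reference; how R2-Dreamer adapts it to world-model latents (which representations are compared, and the weighting `lam`) is not specified in this abstract, so those details here are illustrative assumptions only.

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Barlow Twins redundancy-reduction loss on two (N, D) embedding batches.

    z1, z2: embeddings of two views of the same batch of inputs.
    lam: weight on the off-diagonal (redundancy-reduction) term; 5e-3
         follows the original paper, not necessarily R2-Dreamer.
    """
    # Standardize each embedding dimension across the batch.
    z1 = (z1 - z1.mean(axis=0)) / (z1.std(axis=0) + 1e-8)
    z2 = (z2 - z2.mean(axis=0)) / (z2.std(axis=0) + 1e-8)
    n, _ = z1.shape
    # Cross-correlation matrix between the dimensions of the two views.
    c = z1.T @ z2 / n
    # Invariance term: push diagonal entries toward 1 (views agree).
    on_diag = np.sum((np.diag(c) - 1.0) ** 2)
    # Redundancy-reduction term: push off-diagonal entries toward 0
    # (decorrelate embedding dimensions, preventing collapse).
    off_diag = np.sum(c ** 2) - np.sum(np.diag(c) ** 2)
    return on_diag + lam * off_diag
```

Driving the off-diagonal correlations to zero is what makes the objective act as an internal regularizer: a collapsed representation (all dimensions identical) would have a fully correlated cross-correlation matrix and thus a large loss, so no external data augmentation is needed to avoid trivial solutions.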