[2503.08751] Disentangled World Models: Learning to Transfer Semantic Knowledge from Distracting Videos for Reinforcement Learning


arXiv - Machine Learning · 4 min read


Computer Science > Computer Vision and Pattern Recognition
arXiv:2503.08751 (cs)
[Submitted on 11 Mar 2025 (v1), last revised 4 Apr 2026 (this version, v3)]

Title: Disentangled World Models: Learning to Transfer Semantic Knowledge from Distracting Videos for Reinforcement Learning

Authors: Qi Wang, Zhipeng Zhang, Baao Xie, Xin Jin, Yunbo Wang, Shiyu Wang, Liaomo Zheng, Xiaokang Yang, Wenjun Zeng

Abstract: Training visual reinforcement learning (RL) agents in practical scenarios presents a significant challenge: RL agents suffer from low sample efficiency in environments with variations. While various approaches have attempted to alleviate this issue through disentangled representation learning, these methods usually learn from scratch, without prior knowledge of the world. This paper, in contrast, tries to learn and understand underlying semantic variations from distracting videos via offline-to-online latent distillation and flexible disentanglement constraints. To enable effective cross-domain semantic knowledge transfer, we introduce an interpretable model-based RL framework, dubbed Disentangled World Models (DisWM). Specifically, we pretrain the action-free video prediction model offline with disentanglement regularization to extract semantic knowledge ...
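The offline-to-online latent distillation described in the abstract can be illustrated with a minimal NumPy sketch: a frozen encoder pretrained offline on action-free videos provides target latents, and the online world model's encoder is regularized to match them. Everything here is an illustrative assumption, not the paper's implementation: the linear encoders, the `teacher_encode`/`student_encode` names, and the plain MSE distillation loss stand in for whatever architecture and objective DisWM actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher_encode(obs):
    # Hypothetical frozen encoder pretrained offline on distracting videos.
    # A fixed projection stands in for learned disentangled latents.
    W = np.linspace(-1.0, 1.0, obs.size * 8).reshape(obs.size, 8)
    return obs @ W

def student_encode(obs, theta):
    # Online world-model encoder with trainable parameters theta.
    return obs @ theta

def distill_loss(obs, theta):
    # Latent distillation: pull the online latent toward the frozen
    # offline latent (mean-squared error between the two codes).
    z_teacher = teacher_encode(obs)
    z_student = student_encode(obs, theta)
    return float(np.mean((z_student - z_teacher) ** 2))

obs = rng.normal(size=16)      # flattened observation (toy stand-in)
theta = np.zeros((16, 8))      # student starts from scratch

loss_before = distill_loss(obs, theta)

# One gradient step on theta (analytic gradient of the MSE above).
residual = student_encode(obs, theta) - teacher_encode(obs)
grad = 2.0 * np.outer(obs, residual) / residual.size
theta -= 0.01 * grad

loss_after = distill_loss(obs, theta)
assert loss_after < loss_before  # the online latent moves toward the offline one
```

In the full method this distillation term would be one loss among several (reconstruction, prediction, and the disentanglement constraints the abstract mentions), applied to the online agent's latents rather than a single static observation.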

Originally published on April 07, 2026. Curated by AI News.

Related Articles

LLMs

Things I got wrong building a confidence evaluator for local LLMs [D]

I've been building **Autodidact**, a local-first AI agent framework. The central piece is a **confidence evaluator** - something that dec...

Reddit - Machine Learning · 1 min
LLMs

I’m convinced 90% of you building "AI Agents" are just burning money on proxy providers. [D]

Seriously, I just audited my stack and realized I’m spending more on rotating residential proxies than I am on the actual Claude and Open...

Reddit - Machine Learning · 1 min
Machine Learning

I recently tested Gemma 4-31B locally and I was blown away with the intelligence/size ratio of this model. These papers show how they achieved such distillation capabilities. [R]

The secret sauce here is that the student model does not just try to guess the next token in a sentence, which is how most AI is trained....

Reddit - Machine Learning · 1 min
LLMs

How do you test AI agents in production? The unpredictability is overwhelming. [D]

I’ve been in QA for almost a decade. My mental model for quality was always: given input X, assert output Y. Now I’m on a team that’s shi...

Reddit - Machine Learning · 1 min

