[2603.23245] Neural ODE and SDE Models for Adaptation and Planning in Model-Based Reinforcement Learning

arXiv - AI 4 min read

Computer Science > Machine Learning · arXiv:2603.23245 (cs) · Submitted on 24 Mar 2026

Title: Neural ODE and SDE Models for Adaptation and Planning in Model-Based Reinforcement Learning

Authors: Chao Han, Stefanos Ioannou, Luca Manneschi, T.J. Hayward, Michael Mangan, Aditya Gilra, Eleni Vasilaki

Abstract: We investigate neural ordinary and stochastic differential equations (neural ODEs and SDEs) to model stochastic dynamics in fully and partially observed environments within a model-based reinforcement learning (RL) framework. Through a sequence of simulations, we show that neural SDEs more effectively capture the inherent stochasticity of transition dynamics, enabling high-performing policies with improved sample efficiency in challenging scenarios. We leverage neural ODEs and SDEs for efficient policy adaptation to changes in environment dynamics via inverse models, requiring only limited interactions with the new environment. To address partial observability, we introduce a latent SDE model that combines an ODE with a GAN-trained stochastic component in latent space. Policies derived from this model provide a strong baseline, outperforming or matching general model-based and model-free approaches across stochastic continuous-control benchmarks. This work demonstrates the applicability of acti...
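The core object the abstract describes, a neural SDE as a learned transition model rolled out for planning, can be sketched with an Euler–Maruyama discretization: the next state is the current state plus a learned drift term times dt, plus a learned diffusion term scaled by sqrt(dt) times Gaussian noise. The sketch below is illustrative only, not the authors' architecture; the tiny numpy MLPs, the diagonal diffusion, the dimensions, and the constant-action policy are all placeholder assumptions, and a real implementation would train the drift and diffusion networks on observed transitions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_init(sizes, rng):
    # Parameters of a small MLP: list of (W, b) pairs.
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    # tanh hidden layers, linear output layer.
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

state_dim, action_dim, hidden = 3, 1, 16
drift_net = mlp_init([state_dim + action_dim, hidden, state_dim], rng)
diff_net = mlp_init([state_dim + action_dim, hidden, state_dim], rng)

def sde_step(s, a, dt, rng):
    # One Euler-Maruyama step of the learned SDE:
    #   s' = s + f(s, a) * dt + g(s, a) * sqrt(dt) * eps,  eps ~ N(0, I)
    x = np.concatenate([s, a])
    f = mlp_forward(drift_net, x)          # drift
    g = np.exp(mlp_forward(diff_net, x))   # positive diagonal diffusion
    eps = rng.standard_normal(state_dim)
    return s + f * dt + g * np.sqrt(dt) * eps

def rollout(s0, policy, horizon, dt, rng):
    # Monte Carlo rollout of the learned model; a planner would score
    # many such rollouts over candidate action sequences or policies.
    s, traj = s0, [s0]
    for _ in range(horizon):
        s = sde_step(s, policy(s), dt, rng)
        traj.append(s)
    return np.stack(traj)

traj = rollout(np.zeros(state_dim), lambda s: np.array([0.5]),
               horizon=20, dt=0.05, rng=rng)
print(traj.shape)  # (21, 3)
```

Because the diffusion term makes each rollout a distinct sample, a planner built on this model evaluates expected returns by averaging over several noisy trajectories rather than a single deterministic one; with the diffusion output forced to zero, the same code reduces to a neural-ODE-style deterministic model.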

Originally published on March 25, 2026. Curated by AI News.

Related Articles

[2604.01676] GPA: Learning GUI Process Automation from Demonstrations
LLMs · arXiv - AI · 3 min

[2604.01413] Adaptive Stopping for Multi-Turn LLM Reasoning
LLMs · arXiv - AI · 4 min

[2603.13842] Fine-tuning is Not Enough: A Parallel Framework for Collaborative Imitation and Reinforcement Learning in End-to-end Autonomous Driving
Machine Learning · arXiv - AI · 4 min

[2603.12510] Red-Teaming Vision-Language-Action Models via Quality Diversity Prompt Generation for Robust Robot Policies
Machine Learning · arXiv - AI · 4 min