[2510.15165] Policy Transfer for Continuous-Time Reinforcement Learning: A (Rough) Differential Equation Approach

arXiv - Machine Learning 4 min read

About this article

Computer Science > Machine Learning
arXiv:2510.15165 (cs)
[Submitted on 16 Oct 2025 (v1), last revised 3 Mar 2026 (this version, v3)]

Title: Policy Transfer for Continuous-Time Reinforcement Learning: A (Rough) Differential Equation Approach
Authors: Xin Guo, Zijiu Lyu

Abstract: This paper studies policy transfer, one of the well-known transfer learning techniques adopted in large language models, for continuous-time reinforcement learning problems. For continuous-time linear-quadratic systems with Shannon entropy regularization, we fully exploit the Gaussian structure of the optimal policy and the stability of the associated Riccati equations. In the general case, where the system may have non-linear, bounded dynamics, the key technical component is the stability of diffusion SDEs, which we establish by invoking rough path theory. Our work provides the first theoretical proof of policy transfer for continuous-time RL: an optimal policy learned for one RL problem can be used to initialize the search for a near-optimal policy for another, closely related RL problem, while achieving (at least) the same rate of convergence as the original algorithm. As a byproduct of our analysis, we derive the stability of a concrete class of continuous-time score-based diffusio...
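The abstract's core idea — reuse the optimal policy of a solved control problem to warm-start the search on a nearby problem — can be illustrated with a toy sketch. The snippet below is not the paper's method: it uses a scalar, discrete-time, deterministic LQR analogue (no entropy regularization, no SDEs), and all parameter values (`a`, `b`, `q`, `r`, `gamma`) are invented for illustration. It solves a source task exactly via its scalar Riccati fixed point, then runs plain gradient descent on the target task's closed-loop cost from both the transferred gain and a cold start, comparing iteration counts.

```python
# Hypothetical sketch: "policy transfer" as warm-starting policy gradient on a
# scalar discrete-time LQR analogue. All parameters are invented for the demo.

def riccati_gain(a, b, q, r, gamma, iters=500):
    """Iterate the scalar discounted Riccati fixed point; return optimal gain k*."""
    p = q
    for _ in range(iters):
        p = q + gamma * a * a * p - (gamma * a * b * p) ** 2 / (r + gamma * b * b * p)
    return gamma * a * b * p / (r + gamma * b * b * p)

def grad_J(k, a, b, q, r, gamma):
    """Derivative of the closed-loop cost J(k) = (q + r k^2) / (1 - gamma (a - b k)^2),
    valid on the stabilizing region gamma * (a - b k)^2 < 1."""
    n, d = q + r * k * k, 1.0 - gamma * (a - b * k) ** 2
    dn, dd = 2.0 * r * k, 2.0 * gamma * b * (a - b * k)
    return (dn * d - n * dd) / (d * d)

def policy_gradient(k0, a, b, q, r, gamma, lr=2e-3, tol=1e-6, max_iters=100_000):
    """Plain gradient descent on J; return (final gain, iterations used)."""
    k = k0
    for i in range(max_iters):
        g = grad_J(k, a, b, q, r, gamma)
        if abs(g) < tol:
            return k, i
        k -= lr * g
    return k, max_iters

gamma = 0.95
# Source task A, and a nearby target task B with slightly different drift a.
k_src = riccati_gain(a=0.90, b=1.0, q=1.0, r=1.0, gamma=gamma)
k_warm, it_warm = policy_gradient(k_src, a=0.95, b=1.0, q=1.0, r=1.0, gamma=gamma)
k_cold, it_cold = policy_gradient(0.0,   a=0.95, b=1.0, q=1.0, r=1.0, gamma=gamma)
print(f"warm start: {it_warm} iters, cold start: {it_cold} iters")
```

Both runs converge to the same target-optimal gain, but the transferred initialization starts closer to it and so needs fewer gradient steps — a discrete-time caricature of the transfer guarantee the paper proves in the continuous-time, entropy-regularized setting.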

Originally published on March 04, 2026. Curated by AI News.

