[2604.02035] Reinforcement Learning for Speculative Trading under Exploratory Framework
Quantitative Finance > Mathematical Finance
arXiv:2604.02035 (q-fin) [Submitted on 2 Apr 2026]

Title: Reinforcement Learning for Speculative Trading under Exploratory Framework
Authors: Yun Zhao, Alex S.L. Tse, Harry Zheng

Abstract: We study a speculative trading problem within the exploratory reinforcement learning (RL) framework of Wang et al. [2020]. The problem is formulated as a sequential optimal stopping problem over entry and exit times under a general utility function and price process. We first consider a relaxed version of the problem in which the stopping times are modeled as the jump times of Cox processes driven by bounded, non-randomized intensity controls. Under the exploratory formulation, the agent's randomized control is characterized by a probability measure over the jump intensities, and the objective function is regularized by Shannon's differential entropy. This yields a system of exploratory HJB equations, with Gibbs distributions arising in closed form as the optimal policy. Error estimates and convergence of the RL objective to the value function of the original problem are established. Finally, an RL algorithm is designed, and its implementation is showcased in a pairs-trading application.

Subjects: Mathematical Finance (q-fin.MF); Machine Learning (cs.LG); Optimization and Control (math.OC)
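The closed-form Gibbs policy mentioned in the abstract can be illustrated with a minimal sketch. Here the jump-intensity set is discretized, and a hypothetical value function `q` stands in for the Hamiltonian term that the paper's exploratory HJB equation would supply; the entropy-regularized optimal policy then takes the familiar form pi(lam) proportional to exp(q(lam)/temperature). All names and the quadratic shape of `q` are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def gibbs_policy(q_values, temperature):
    """Closed-form optimal policy under Shannon-entropy regularization:
    pi(lam) proportional to exp(q(lam) / temperature)."""
    logits = q_values / temperature
    logits -= logits.max()              # subtract max for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

# Discretize the bounded intensity set [0, lam_max].
lam_max = 5.0
grid = np.linspace(0.0, lam_max, 101)

# Hypothetical value of choosing each jump intensity (illustrative only;
# in the paper this role is played by terms of the exploratory HJB system).
q = -(grid - 2.0) ** 2

pi = gibbs_policy(q, temperature=0.5)

# The exploratory agent samples its intensity control from the Gibbs law.
sampled_lam = np.random.default_rng(0).choice(grid, p=pi)
```

A lower `temperature` concentrates the policy on the maximizing intensity (recovering the non-exploratory control in the limit), while a higher one spreads probability mass and increases exploration, mirroring the entropy-regularization trade-off described in the abstract.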