[2510.13868] DeepMartingale: Duality of the Optimal Stopping Problem with Expressivity and High-Dimensional Hedging
Summary
The paper introduces DeepMartingale, a deep-learning framework addressing the dual formulation of optimal stopping problems, enhancing scalability and expressivity in high-dimensional hedging scenarios.
Why It Matters
DeepMartingale offers a novel approach to optimizing high-dimensional financial models without succumbing to the curse of dimensionality. This advancement matters for practitioners in finance and machine learning because it makes dual pricing bounds and hedging computations tractable in complex, high-dimensional settings.
Key Takeaways
- DeepMartingale optimizes dual formulations of optimal stopping problems using a deep-learning framework.
- The method avoids the curse of dimensionality, making it scalable for high-dimensional settings.
- It provides tight dual upper bounds for value functions without requiring primal information.
- Numerical experiments validate the framework's effectiveness in hedging strategies.
- An expressivity theorem guarantees that the framework can approximate the true value function to any prescribed accuracy using networks whose size grows only polynomially in the dimension.
Mathematics > Optimization and Control
arXiv:2510.13868 (math)
[Submitted on 13 Oct 2025 (v1), last revised 26 Feb 2026 (this version, v2)]
Title: DeepMartingale: Duality of the Optimal Stopping Problem with Expressivity and High-Dimensional Hedging
Authors: Junyan Ye, Hoi Ying Wong
Abstract: We propose \textit{DeepMartingale}, a deep-learning framework for the dual formulation of discrete-monitoring optimal stopping problems under continuous-time models. Leveraging a martingale representation, our method implements a \emph{pure-dual} procedure that directly optimizes over a parameterized class of martingales, producing computable and tight \emph{dual upper bounds} for the value function in high-dimensional settings without requiring any primal information or Snell-envelope approximation. We prove convergence of the resulting upper bounds under mild assumptions for both first- and second-moment losses. A key contribution is an expressivity theorem showing that \textit{DeepMartingale} can approximate the true value function to any prescribed accuracy $\varepsilon$ using neural networks of size at most $\tilde{c} d^{\tilde{q}}\varepsilon^{-\tilde{r}}$, with constants independent of the dimension $d$ and accuracy $\varepsilon$, thereby avoiding the curse of dimensionality. Since expressivity in this setti...
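To make the pure-dual idea concrete, here is a minimal sketch of the classical dual upper bound for a Bermudan option (in the spirit of Rogers' martingale duality, which the paper builds on): simulate paths, subtract a martingale built from the driving Brownian increments, and average the pathwise maximum of the discounted payoff minus the martingale. The hand-crafted integrand `h` below is an illustrative stand-in for the paper's neural-network-parameterized martingale, and all model parameters are assumptions for the example, not values from the paper.

```python
# Dual (martingale) upper bound for a Bermudan put under GBM.
# The integrand h(S) is a crude "delta-like" guess, standing in for the
# trained neural martingale of DeepMartingale. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 20_000, 10
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
dt = T / n_steps

# Simulate GBM paths, keeping the Brownian increments that drive them.
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
log_S = np.log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt + sigma * dW, axis=1)
S = np.concatenate([np.full((n_paths, 1), S0), np.exp(log_S)], axis=1)

t = np.arange(n_steps + 1) * dt
payoff = np.exp(-r * t) * np.maximum(K - S, 0.0)  # discounted put payoff

# Parameterized martingale M_k = sum_{j<k} h(S_j) dW_j, started at M_0 = 0.
h = -sigma * S[:, :-1] * np.exp(-r * t[:-1]) * (S[:, :-1] < K)
M = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(h * dW, axis=1)], axis=1)

# Dual upper bound on the value: E[ max_k (payoff_k - M_k) ].
upper_bound = np.mean(np.max(payoff - M, axis=1))
print(f"dual upper bound: {upper_bound:.3f}")
```

The bound is valid for *any* martingale `M`; a better integrand yields a tighter bound, and DeepMartingale's contribution is to optimize this choice directly over a neural class with provable convergence and expressivity guarantees.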