[2508.16815] Uncertainty Propagation Networks for Neural Ordinary Differential Equations
Summary
The paper presents Uncertainty Propagation Networks (UPN), a novel family of neural ordinary differential equations that integrates uncertainty quantification into continuous-time modeling, yielding calibrated predictive uncertainty in domains such as normalizing flows, time-series forecasting, and trajectory prediction.
Why It Matters
This research addresses a critical gap in neural ODEs by incorporating uncertainty quantification, which is essential for applications requiring reliable predictions, such as time-series forecasting and trajectory prediction. The ability to model uncertainty can significantly improve decision-making in fields like finance, robotics, and environmental science.
Key Takeaways
- UPN models both state evolution and uncertainty simultaneously.
- It efficiently propagates uncertainty through nonlinear dynamics.
- The architecture adapts its evaluation strategy based on input complexity.
- Experiments demonstrate effectiveness on continuous normalizing flows, time-series forecasting, and trajectory prediction.
- UPN can handle irregularly-sampled observations naturally.
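The takeaways above center on one mechanism: integrating coupled ODEs for the state mean and its covariance. The summary does not give the paper's exact parameterization, so the following is a minimal sketch under a standard linearization (moment-matching) assumption: the mean follows the drift f, while the covariance follows a Lyapunov-type ODE driven by the drift's Jacobian plus a process-noise rate Q (learnable and state-dependent in UPN; a constant here). The drift `f`, `jac`, and `Q` below are illustrative placeholders, not the paper's learned networks.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical drift: a lightly damped 2-D oscillator standing in
# for a learned neural drift network.
A = np.array([[0.0, 1.0],
              [-1.0, -0.1]])

def f(m):
    return A @ m

def jac(m):
    # For this linear drift the Jacobian is constant; a neural drift
    # would use autodiff here.
    return A

Q = 0.01 * np.eye(2)  # process-noise rate (state-dependent in UPN)

def coupled_rhs(t, y):
    """Right-hand side of the coupled mean/covariance ODE system."""
    m, P = y[:2], y[2:].reshape(2, 2)
    J = jac(m)
    dm = f(m)                     # mean dynamics
    dP = J @ P + P @ J.T + Q      # covariance dynamics (Lyapunov-type)
    return np.concatenate([dm, dP.ravel()])

m0 = np.array([1.0, 0.0])
P0 = 0.05 * np.eye(2)
y0 = np.concatenate([m0, P0.ravel()])

# An adaptive solver adjusts step size to the input's complexity,
# avoiding fixed-grid discretization artifacts.
sol = solve_ivp(coupled_rhs, (0.0, 5.0), y0, rtol=1e-8, atol=1e-10)
mT = sol.y[:2, -1]
PT = sol.y[2:, -1].reshape(2, 2)
```

Because the solver can be queried at arbitrary times, the same formulation handles irregularly-sampled observations naturally: evaluate the integrated mean and covariance at whatever timestamps the data provides.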
Computer Science > Machine Learning — arXiv:2508.16815 (cs)
Submitted on 22 Aug 2025 (v1), last revised 24 Feb 2026 (this version, v2)
Authors: Hadi Jahanshahi, Zheng H. Zhu
Abstract
This paper introduces the Uncertainty Propagation Network (UPN), a novel family of neural differential equations that naturally incorporates uncertainty quantification into continuous-time modeling. Unlike existing neural ODEs that predict only state trajectories, UPN simultaneously models both state evolution and its associated uncertainty by parameterizing coupled differential equations for mean and covariance dynamics. The architecture efficiently propagates uncertainty through nonlinear dynamics without discretization artifacts by solving coupled ODEs for state and covariance evolution while enabling state-dependent, learnable process noise. The continuous-depth formulation adapts its evaluation strategy to each input's complexity, provides principled uncertainty quantification, and handles irregularly-sampled observations naturally. Experimental results demonstrate UPN's effectiveness across multiple domains: continuous normalizing flows (CNFs) with uncertainty quantification, time-series forecasting with well-calibrated confidence intervals, and robust trajectory prediction.