[2603.22219] Noise Titration: Exact Distributional Benchmarking for Probabilistic Time Series Forecasting
Computer Science > Machine Learning
arXiv:2603.22219 (cs)
[Submitted on 23 Mar 2026]

Title: Noise Titration: Exact Distributional Benchmarking for Probabilistic Time Series Forecasting
Authors: Qilin Wang

Abstract: Modern time series forecasting is evaluated almost entirely through passive observation of single historical trajectories, rendering claims about a model's robustness to non-stationarity fundamentally unfalsifiable. We propose a paradigm shift toward interventionist, exact-statistical benchmarking. By systematically titrating calibrated Gaussian observation noise into known chaotic and stochastic dynamical systems, we transform forecasting from a black-box sequence-matching game into an exact distributional inference task. Because the underlying data-generating process and noise variance are mathematically explicit, evaluation can rely on exact negative log-likelihoods and calibrated distributional tests rather than heuristic approximations. To fully leverage this framework, we extend the Fern architecture into a probabilistic generative model that natively parameterizes the Symmetric Positive Definite (SPD) cone, outputting calibrated joint covariance structures without the computational bottleneck of generic Jacobian modeling. Under this rigorous evaluation, we find that state-of-the-art zero-shot ...
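The core evaluation loop described in the abstract can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's benchmark: it uses the logistic map as a stand-in for the (unspecified) chaotic systems, adds Gaussian observation noise of known variance, and scores a predictive distribution with the exact Gaussian negative log-likelihood. The function names and the choice of oracle forecaster are assumptions for illustration only.

```python
import numpy as np

def simulate_logistic(n, r=3.9, x0=0.5):
    # Known chaotic data-generating process (logistic map), standing in
    # for the paper's (unspecified) chaotic/stochastic systems.
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        x[t] = r * x[t - 1] * (1.0 - x[t - 1])
    return x

def titrate_noise(clean, sigma, rng):
    # "Titrate" calibrated Gaussian observation noise of known
    # variance sigma**2 into the clean trajectory.
    return clean + rng.normal(0.0, sigma, size=clean.shape)

def exact_gaussian_nll(mu, var, y):
    # Exact per-step negative log-likelihood of observations y under a
    # forecaster's Gaussian predictive distribution N(mu, var). Because
    # the noise model is explicit, no heuristic approximation is needed.
    return 0.5 * np.mean(np.log(2.0 * np.pi * var) + (y - mu) ** 2 / var)

rng = np.random.default_rng(0)
sigma = 0.05
clean = simulate_logistic(500)
noisy = titrate_noise(clean, sigma, rng)

# An oracle that knows the true state and noise level attains the
# entropy floor 0.5*log(2*pi*sigma^2) + 0.5; any real forecaster's
# exact NLL can be compared directly against this floor.
oracle_nll = exact_gaussian_nll(clean, sigma ** 2, noisy)
```

In this setup a model's score is interpretable in absolute terms: its exact NLL minus the oracle floor measures how much predictive information it fails to extract, rather than a relative ranking against other black boxes.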