[2603.04431] Uncertainty-Calibrated Spatiotemporal Field Diffusion with Sparse Supervision
Computer Science > Machine Learning
arXiv:2603.04431 (cs) [Submitted on 17 Feb 2026]

Title: Uncertainty-Calibrated Spatiotemporal Field Diffusion with Sparse Supervision
Authors: Kevin Valencia, Xihaier Luo, Shinjae Yoo, David Keetae Park

Abstract: Physical fields are typically observed only at sparse, time-varying sensor locations, making forecasting and reconstruction ill-posed and uncertainty-critical. We present SOLID, a mask-conditioned diffusion framework that learns spatiotemporal dynamics from sparse observations alone: training and evaluation use only observed target locations, requiring no dense fields and no pre-imputation. Unlike prior work that trains on dense reanalysis or simulations and only tests under sparsity, SOLID is trained end-to-end with sparse supervision only. SOLID conditions each denoising step on the measured values and their locations, and introduces a dual-masking objective that (i) emphasizes learning in unobserved void regions while (ii) upweights overlap pixels where inputs and targets provide the most reliable anchors. This strict sparse-conditioning pathway enables posterior sampling of full fields consistent with the measurements, achieving up to an order-of-magnitude improvement in probabilistic error and yielding calibrated uncertainty maps (ρ > 0.7) under se...
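To make the dual-masking idea concrete, here is a minimal sketch of a weighted masked loss in the spirit the abstract describes. This is an illustration based only on the abstract, not the paper's actual objective: the masks, the specific weights (`w_void`, `w_overlap`), and the use of MSE are all assumptions. Supervision is restricted to pixels observed in the target; within those, "overlap" pixels (observed in both input and target) and "void" pixels (observed in the target but unseen by the input) receive separate weights.

```python
import numpy as np

def dual_masked_loss(pred, target, input_mask, target_mask,
                     w_void=2.0, w_overlap=3.0):
    """Illustrative dual-masked MSE (weights are hypothetical).

    Loss is computed only at observed target pixels. Pixels observed
    in both input and target ("overlap") act as reliable anchors and
    are upweighted; target pixels unseen by the input ("void") are
    also emphasized, mirroring the abstract's dual-masking objective.
    """
    overlap = input_mask & target_mask    # observed in input AND target
    void = target_mask & ~input_mask      # observed in target only
    # Zero weight anywhere the target is unobserved (sparse supervision).
    weights = np.where(overlap, w_overlap, np.where(void, w_void, 0.0))
    sq_err = (pred - target) ** 2
    denom = weights.sum()
    return float((weights * sq_err).sum() / denom) if denom > 0 else 0.0

# Tiny example: one overlap pixel (zero error) and one void pixel (error 1)
pred = np.array([[1.0, 2.0], [3.0, 4.0]])
target = np.ones((2, 2))
input_mask = np.array([[True, False], [False, False]])
target_mask = np.array([[True, True], [False, False]])
loss = dual_masked_loss(pred, target, input_mask, target_mask)
# → 0.4  (weighted mean: (3*0 + 2*1) / (3 + 2))
```

In the real framework this weighting would sit inside the diffusion denoising loss rather than a plain MSE, but the masking logic, restricting supervision to observed targets while reweighting void and overlap regions, would be analogous.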