[2602.08927] Online monotone density estimation and log-optimal calibration
Statistics > Machine Learning
arXiv:2602.08927 (stat)
[Submitted on 9 Feb 2026 (v1), last revised 30 Mar 2026 (this version, v2)]

Title: Online monotone density estimation and log-optimal calibration
Authors: Rohan Hore, Ruodu Wang, Aaditya Ramdas

Abstract: We study the problem of online monotone density estimation, where density estimators must be constructed in a predictable manner from sequentially observed data. We propose two online estimators: an online analogue of the classical Grenander estimator, and an expert aggregation estimator inspired by exponential weighting methods from the online learning literature. In the well-specified stochastic setting, where the underlying density is monotone, we show that the expected cumulative log-likelihood gap between the online estimators and the true density admits an $O(n^{1/3})$ bound. We further establish an $O(\sqrt{n\log n})$ pathwise regret bound for the expert aggregation estimator relative to the best offline monotone estimator chosen in hindsight, under minimal regularity assumptions on the observed sequence. As an application of independent interest, we show that the problem of constructing log-optimal p-to-e calibrators for sequential hypothesis testing can be formulated as an online monotone density estimation problem. We adapt the proposed estimators to build empi...
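For context on the classical estimator the paper builds on: the (offline) Grenander estimator of a nonincreasing density is the left derivative of the least concave majorant of the empirical CDF, computable by pooling adjacent violators. The sketch below is illustrative only, assuming sorted positive observations; the function name and interface are ours, and this is the batch estimator, not the paper's online analogue.

```python
def grenander(x):
    """Offline Grenander estimator for a nonincreasing density on (0, inf).

    x: sorted positive observations.
    Returns the constant density value on each interval (x_{i-1}, x_i]
    (with x_0 = 0), obtained by pool-adjacent-violators on the slopes
    of the empirical CDF.
    """
    n = len(x)
    pts = [0.0] + list(x)
    # each block: [probability mass, width, number of intervals pooled]
    blocks = []
    for i in range(1, n + 1):
        blocks.append([1.0 / n, pts[i] - pts[i - 1], 1])
        # pool while the slope sequence violates "nonincreasing"
        # (compare slopes m/w via cross-multiplication, avoiding division)
        while len(blocks) >= 2 and (blocks[-2][0] * blocks[-1][1]
                                    < blocks[-1][0] * blocks[-2][1]):
            m, w, k = blocks.pop()
            blocks[-1][0] += m
            blocks[-1][1] += w
            blocks[-1][2] += k
    dens = []
    for m, w, k in blocks:
        dens.extend([m / w] * k)  # same constant value across pooled intervals
    return dens
```

For example, `grenander([1.0, 2.0, 4.0])` yields the nonincreasing step density `[1/3, 1/3, 1/6]`, which integrates to 1 over (0, 4].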