[2603.23835] Beyond Consistency: Inference for the Relative risk functional in Deep Nonparametric Cox Models
Statistics > Machine Learning
arXiv:2603.23835 (stat) [Submitted on 25 Mar 2026]
Title: Beyond Consistency: Inference for the Relative risk functional in Deep Nonparametric Cox Models
Authors: Sattwik Ghosal, Xuran Meng, Yi Li

Abstract: Theoretical gaps remain in the study of deep neural network estimators for the nonparametric Cox proportional hazards model. In particular, it is unclear how gradient-based optimization error propagates to population risk under the partial likelihood, how pointwise bias can be controlled to permit valid inference, and how ensemble-based uncertainty quantification behaves under realistic variance-decay regimes. We develop an asymptotic distribution theory for deep Cox estimators that addresses these issues. First, we establish nonasymptotic oracle inequalities for general trained networks that link in-sample optimization error to population risk without requiring the exact empirical risk minimizer. We then construct a structured neural parameterization that achieves infinity-norm approximation rates compatible with the oracle bound, yielding control of the pointwise bias. Under these conditions and using the Hájek–Hoeffding projection, we prove pointwise and multivariate asymptotic normality for subsampled ensemble estimators. We derive a range of subsample sizes...
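To make the two central objects of the abstract concrete, the sketch below implements (i) the Cox negative log partial likelihood, the empirical risk that the trained networks approximately minimize, and (ii) a generic subsampled ensemble estimator of the kind whose asymptotic normality the paper studies. This is a minimal illustration, not the authors' implementation; the function names and the subsampling fraction are hypothetical choices, and ties in event times are handled in the simple Breslow fashion.

```python
import numpy as np

def neg_log_partial_likelihood(risk, time, event):
    """Cox negative log partial likelihood (Breslow handling of ties).

    risk  : array of log relative risks f(x_i) from the model
    time  : observed (possibly censored) times
    event : 1 if a failure was observed, 0 if censored
    """
    order = np.argsort(-time)                  # sort by descending time
    risk, event = risk[order], event[order]
    # After the descending sort, the risk set {j : t_j >= t_i} for
    # subject i is exactly subjects 0..i, so a cumulative log-sum-exp
    # gives log sum_{j in risk set} exp(f(x_j)).
    log_cumsum = np.logaddexp.accumulate(risk)
    return -np.sum(event * (risk - log_cumsum))

def subsample_ensemble(fit_factory, X, time, event, B=20, frac=0.5, rng=None):
    """Average the predictions of B estimators, each trained on a
    random subsample of size floor(frac * n) drawn without replacement.

    fit_factory(X_sub, t_sub, e_sub) -> predictor mapping X to log risks.
    (Hypothetical interface for illustration only.)
    """
    rng = np.random.default_rng(rng)
    n = len(time)
    m = int(frac * n)
    preds = []
    for _ in range(B):
        idx = rng.choice(n, size=m, replace=False)
        predictor = fit_factory(X[idx], time[idx], event[idx])
        preds.append(predictor(X))
    return np.mean(preds, axis=0)              # ensemble estimate of f(x)
```

The paper's inferential results concern precisely this kind of averaged estimator: when the base learner is a trained deep network and the subsample size is chosen in the derived range, the ensemble admits a normal limit that supports pointwise confidence intervals for the log relative risk.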