[2603.01337] Adaptive Estimation and Inference in Conditional Moment Models via the Discrepancy Principle
Statistics > Machine Learning
arXiv:2603.01337 (stat) [Submitted on 2 Mar 2026]

Title: Adaptive Estimation and Inference in Conditional Moment Models via the Discrepancy Principle
Authors: Jiyuan Tan, Vasilis Syrgkanis

Abstract: We study adaptive estimation and inference in ill-posed linear inverse problems defined by conditional moment restrictions. Existing regularized estimators, such as Regularized DeepIV (RDIV), require prior knowledge of the smoothness of the nuisance function, typically encoded by a β-source condition, to tune their regularization parameters. In practice, this smoothness is unknown, and misspecified hyperparameters can lead to suboptimal convergence or instability. We introduce a discrepancy-principle-based framework for adaptive hyperparameter selection that automatically balances bias and variance without relying on the unknown smoothness parameter. Our framework applies to both RDIV (Li et al. [2024]) and the Tikhonov Regularized Adversarial Estimator (TRAE) (Bennett et al. [2023a]) and achieves the same rates in both weak and strong metrics. Building on this, we construct a fully adaptive doubly robust estimator for linear functionals that attains the optimal rate of the better-conditioned primal or dual problem, providing a practical, theoretically grounded appro...
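For intuition only: the abstract's adaptive rule builds on the classical (Morozov) discrepancy principle, which selects the regularization parameter so that the residual norm matches the noise level, rather than using a smoothness-dependent formula. A minimal sketch for plain Tikhonov regularization on a toy ill-conditioned least-squares problem is below; this is not the paper's estimator, and all function names and the stopping rule constants (`tau`, the geometric grid `q`) are illustrative assumptions:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized solution: argmin_x ||Ax - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def discrepancy_principle(A, b, delta, tau=1.1, lam0=1.0, q=0.5, max_iter=50):
    """Shrink lam geometrically until the residual ||A x_lam - b|| <= tau * delta.

    delta is the (assumed known) noise level; tau > 1 is a safety factor.
    Larger lam means more bias / less variance, so we stop at the largest
    lam whose residual is already consistent with the noise.
    """
    lam = lam0
    for _ in range(max_iter):
        x = tikhonov(A, b, lam)
        if np.linalg.norm(A @ x - b) <= tau * delta:
            return lam, x
        lam *= q
    return lam, x

# Toy ill-posed problem: an ill-conditioned Vandermonde design plus noise.
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 30), 8, increasing=True)
x_true = rng.normal(size=8)
noise = 1e-3 * rng.normal(size=30)
b = A @ x_true + noise
delta = np.linalg.norm(noise)

lam, x_hat = discrepancy_principle(A, b, delta)
residual = np.linalg.norm(A @ x_hat - b)
print(lam, residual)
```

Note that the stopping test never consults a source condition: the selected `lam` is determined entirely by the observable residual and the noise level, which is the sense in which discrepancy-based selection is "adaptive" to unknown smoothness.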