[2512.23138] Why Machine Learning Models Systematically Underestimate Extreme Values II: How to Fix It with LatentNN
Astrophysics > Instrumentation and Methods for Astrophysics
arXiv:2512.23138 (astro-ph)
[Submitted on 29 Dec 2025 (v1), last revised 25 Mar 2026 (this version, v2)]

Title: Why Machine Learning Models Systematically Underestimate Extreme Values II: How to Fix It with LatentNN
Authors: Yuan-Sen Ting

Abstract: Attenuation bias -- the systematic underestimation of regression coefficients due to measurement errors in input variables -- affects astronomical data-driven models. For linear regression, this problem was solved by treating the true input values as latent variables to be estimated alongside model parameters. In this paper, we show that neural networks suffer from the same attenuation bias and that the latent variable solution generalizes directly to neural networks. We introduce LatentNN, a method that jointly optimizes network parameters and latent input values by maximizing the joint likelihood of observing both inputs and outputs. We demonstrate the correction on one-dimensional regression, multivariate inputs with correlated features, and stellar spectroscopy applications. LatentNN reduces attenuation bias across a range of signal-to-noise ratios where standard neural networks show large bias. This provides a framework for improved neural network inference in the low signal-to-noise regime.
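The latent-variable idea in the abstract can be sketched in the linear-regression case the paper builds on: treat each true input as a latent parameter and alternately update the latents and the model coefficients so as to maximize the joint likelihood of the observed inputs and outputs. The variable names, the alternating update scheme, and all numbers below are illustrative assumptions, not the paper's LatentNN implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
sigma_x, sigma_y = 0.5, 0.5        # assumed-known measurement uncertainties
true_w, true_b = 2.0, 0.5

z_true = rng.normal(0.0, 1.0, n)                    # latent "true" inputs
x = z_true + rng.normal(0.0, sigma_x, n)            # noisy observed inputs
y = true_w * z_true + true_b + rng.normal(0.0, sigma_y, n)

def ols(u, v):
    """Ordinary least-squares slope and intercept of v on u."""
    du = u - u.mean()
    w = (du * (v - v.mean())).sum() / (du * du).sum()
    return w, v.mean() - w * u.mean()

# Naive fit on noisy inputs: slope is biased toward zero (attenuation bias).
w_naive, b_naive = ols(x, y)

# Latent-variable fit: coordinate ascent on the joint log-likelihood
#   -sum (y - w z - b)^2 / (2 sigma_y^2) - sum (x - z)^2 / (2 sigma_x^2)
w, b = w_naive, b_naive
for _ in range(300):
    # Optimal latents given (w, b): precision-weighted combination of the
    # value implied by y and the observed x.
    z = (w * (y - b) / sigma_y**2 + x / sigma_x**2) / (
        w**2 / sigma_y**2 + 1.0 / sigma_x**2)
    # Optimal (w, b) given the latents: plain OLS of y on z.
    w, b = ols(z, y)

print(f"naive slope:  {w_naive:.3f}")   # attenuated, well below 2.0
print(f"latent slope: {w:.3f}")         # close to the true slope 2.0
```

In this toy setup the naive slope is pulled toward zero by roughly the factor sigma_z^2 / (sigma_z^2 + sigma_x^2), while the alternating latent-variable fit recovers a slope near the true value; the abstract's claim is that the same joint-likelihood construction carries over when the linear model is replaced by a neural network.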