[2603.00968] Learning with the Nash-Sutcliffe loss
Statistics > Machine Learning
arXiv:2603.00968 (stat)
[Submitted on 1 Mar 2026]

Title: Learning with the Nash-Sutcliffe loss
Authors: Hristos Tyralis, Georgia Papacharalampous

Abstract: The Nash-Sutcliffe efficiency ($\text{NSE}$) is a widely used, positively oriented relative measure for evaluating forecasts across multiple time series. However, it lacks a decision-theoretic foundation for this purpose. To address this, we examine its negatively oriented counterpart, which we refer to as the Nash-Sutcliffe loss, defined as $L_{\text{NS}} = 1 - \text{NSE}$. We prove that $L_{\text{NS}}$ is strictly consistent for an elicitable and identifiable multi-dimensional functional, which we name the Nash-Sutcliffe functional. This functional is a data-weighted component-wise mean. The common practice of maximizing the average NSE across multiple series is the sample analog of minimizing the expected $L_{\text{NS}}$. Consequently, this operation implicitly assumes that all series originate from a single non-stationary, stochastic process. We introduce Nash-Sutcliffe linear regression, a multi-dimensional model estimated by minimizing the average $L_{\text{NS}}$, which reduces to a data-weighted least squares formulation. By reorienting the sample average loss function, we extend the previously proposed evaluation and estimation framework to forecasting multiple stationary d...
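The abstract does not reproduce the NSE formula, but in its standard form $\text{NSE} = 1 - \sum_t (y_t - \hat{y}_t)^2 / \sum_t (y_t - \bar{y})^2$, so $L_{\text{NS}}$ is the sum of squared forecast errors scaled by the series' spread about its own mean, and averaging $L_{\text{NS}}$ over series weights each series by the reciprocal of that spread. A minimal numpy sketch under that reading, assuming the per-series weight $1/\sum_t (y_{it} - \bar{y}_i)^2$ implied by averaging $L_{\text{NS}}$; the function names `nash_sutcliffe_loss` and `ns_linear_regression` are illustrative, not from the paper:

```python
import numpy as np

def nash_sutcliffe_loss(y, y_hat):
    # L_NS = 1 - NSE, with the standard
    # NSE = 1 - sum((y - y_hat)^2) / sum((y - mean(y))^2),
    # i.e. the squared-error sum scaled by the spread of y about its mean.
    y = np.asarray(y, dtype=float)
    y_hat = np.asarray(y_hat, dtype=float)
    return np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

def ns_linear_regression(X_list, y_list):
    # Fit a single coefficient vector across several series by
    # minimizing the average L_NS.  Each observation of series i
    # carries the weight 1 / sum_t (y_it - mean(y_i))^2, so the
    # objective reduces to an ordinary weighted least squares problem.
    rows, targets, weights = [], [], []
    for X, y in zip(X_list, y_list):
        y = np.asarray(y, dtype=float)
        scale = np.sum((y - y.mean()) ** 2)  # per-series denominator of L_NS
        rows.append(np.asarray(X, dtype=float))
        targets.append(y)
        weights.append(np.full(y.shape, 1.0 / scale))
    X = np.vstack(rows)
    y = np.concatenate(targets)
    sw = np.sqrt(np.concatenate(weights))
    # WLS via row rescaling: argmin_beta || diag(sw) (X beta - y) ||^2.
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta

# Toy check: two series generated from the same linear model.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(50, 2)), rng.normal(size=(80, 2))
true_beta = np.array([1.5, -0.7])
y1 = X1 @ true_beta + rng.normal(scale=0.1, size=50)
y2 = X2 @ true_beta + rng.normal(scale=0.5, size=80)
beta = ns_linear_regression([X1, X2], [y1, y2])
avg_loss = np.mean([nash_sutcliffe_loss(y1, X1 @ beta),
                    nash_sutcliffe_loss(y2, X2 @ beta)])
print(beta, avg_loss)
```

Because the $1/\sum_t (y_{it} - \bar{y}_i)^2$ factor is constant within a series, the minimizer of the average $L_{\text{NS}}$ coincides with the weighted least squares solution above; the row-rescaling trick simply folds the square roots of the weights into the design matrix and targets.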