[2407.10417] Proper losses regret at least 1/2-order
Statistics > Machine Learning
arXiv:2407.10417 (stat)
[Submitted on 15 Jul 2024 (v1), last revised 3 Mar 2026 (this version, v2)]

Title: Proper losses regret at least 1/2-order
Authors: Han Bao, Asuka Takatsu

Abstract: A fundamental challenge in machine learning is the choice of a loss, as it characterizes the learning task, is minimized during training, and serves as an evaluation criterion for estimators. Proper losses are commonly chosen because their full-risk minimizers match the true probability vector. Estimators induced by a proper loss are widely used to construct forecasters for downstream tasks such as classification and ranking. In this procedure, how well does the forecaster built on the obtained estimator perform on a given downstream task? This question is closely tied to how the $p$-norm between the estimated and true probability vectors behaves as the estimator is updated. In the proper-loss framework, the suboptimality of the estimated probability vector relative to the true probability vector is measured by the surrogate regret. First, we analyze surrogate regrets and show that strict properness of a loss is necessary and sufficient to establish a non-vacuous surrogate regret bound. Second, we resolve an important open question by showing that the order of convergence in $p$-norm cannot be faster than the $1/2$-order of the surrogate regret...
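The 1/2-order relationship between surrogate regret and $p$-norm estimation error can be illustrated with a classical special case not taken from the abstract: for the log loss, the surrogate regret is the KL divergence, and Pinsker's inequality bounds the 1-norm distance by the square root of (twice) the KL divergence. A minimal sketch, with hypothetical example vectors `p` and `q`:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions.

    For the log loss, this equals the surrogate regret of predicting q
    when the true probability vector is p.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def l1_distance(p, q):
    """Estimation error measured in the 1-norm."""
    return sum(abs(pi - qi) for pi, qi in zip(p, q))

# True and estimated probability vectors (hypothetical example values).
p = [0.6, 0.3, 0.1]
q = [0.5, 0.35, 0.15]

regret = kl_divergence(p, q)  # surrogate regret of log loss
dist = l1_distance(p, q)      # 1-norm estimation error

# Pinsker's inequality: ||p - q||_1 <= sqrt(2 * KL(p || q)),
# i.e. the estimation error is controlled by the *square root*
# (1/2-order) of the surrogate regret -- the rate the paper shows
# cannot be improved in general.
assert dist <= math.sqrt(2 * regret)
print(f"KL regret = {regret:.4f}, ||p - q||_1 = {dist:.4f}, "
      f"sqrt(2 * KL) = {math.sqrt(2 * regret):.4f}")
```

This is only a sanity check of the known log-loss case; the paper's contribution concerns general (strictly) proper losses and the tightness of the 1/2-order rate.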