[2604.05324] A Theoretical Framework for Statistical Evaluability of Generative Models
Computer Science > Machine Learning
arXiv:2604.05324 (cs)
[Submitted on 7 Apr 2026]

Title: A Theoretical Framework for Statistical Evaluability of Generative Models
Authors: Shashaank Aiyer, Yishay Mansour, Shay Moran, Han Shao

Abstract: Statistical evaluation aims to estimate the generalization performance of a model using held-out i.i.d. test data sampled from the ground-truth distribution. In supervised learning settings such as classification, performance metrics such as error rate are well-defined, and test error reliably approximates population error given sufficiently large datasets. In contrast, evaluation is more challenging for generative models due to their open-ended nature: it is unclear which metrics are appropriate and whether such metrics can be reliably evaluated from finite samples. In this work, we introduce a theoretical framework for evaluating generative models and establish evaluability results for commonly used metrics. We study two categories of metrics: test-based metrics, including integral probability metrics (IPMs), and Rényi divergences. We show that IPMs with respect to any bounded test class can be evaluated from finite samples up to multiplicative and additive approximation errors. Moreover, when the test class has finite fat-shattering dimension, IPMs can be evaluated wi...
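As a hedged illustration of the kind of finite-sample evaluation the abstract discusses (not code from the paper), the plug-in estimator of an IPM replaces population expectations with empirical means: d_F(P, Q) = sup_{f in F} |E_P f - E_Q f| becomes a maximum over sample averages. The test class, sample sizes, and distributions below are hypothetical choices for the sketch; a bounded test class keeps each empirical mean within [-1, 1], which is what makes finite-sample concentration arguments go through.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite test class of bounded functions f: R -> [-1, 1].
test_class = [np.tanh, np.sin, lambda x: np.clip(x, -1.0, 1.0)]

def ipm_estimate(xs, ys, fs):
    """Plug-in IPM estimate: max over f in fs of |mean f(xs) - mean f(ys)|."""
    return max(abs(f(xs).mean() - f(ys).mean()) for f in fs)

p_samples = rng.normal(0.0, 1.0, size=10_000)  # stand-in for ground-truth P
q_samples = rng.normal(0.5, 1.0, size=10_000)  # stand-in for model Q

print(ipm_estimate(p_samples, p_samples, test_class))  # 0.0 on identical samples
print(ipm_estimate(p_samples, q_samples, test_class))  # positive when P != Q
```

With an infinite test class, the maximum over a finite list is no longer available; this is where conditions such as finite fat-shattering dimension, mentioned in the abstract, control how well a finite sample can stand in for the supremum.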