[2603.27871] Statistical Guarantees for Distributionally Robust Optimization with Optimal Transport and OT-Regularized Divergences
Statistics > Machine Learning
arXiv:2603.27871 (stat) [Submitted on 29 Mar 2026]

Title: Statistical Guarantees for Distributionally Robust Optimization with Optimal Transport and OT-Regularized Divergences
Authors: Jeremiah Birrell, Xiaoxi Shen

Abstract: We study finite-sample statistical performance guarantees for distributionally robust optimization (DRO) with optimal transport (OT) and OT-regularized divergence model neighborhoods. Specifically, we derive concentration inequalities for supervised learning via DRO-based adversarial training, as commonly employed to enhance the adversarial robustness of machine learning models. Our results apply to a wide range of OT cost functions, beyond the $p$-Wasserstein case studied by previous authors. In particular, our results are the first to: 1) cover soft-constraint norm-ball OT cost functions; soft-constraint costs have been shown empirically to enhance robustness when used in adversarial training; 2) apply to the combination of adversarial sample generation and adversarial reweighting that is induced by using OT-regularized $f$-divergence model neighborhoods; the added reweighting mechanism has also been shown empirically to further improve performance. In addition, even in the $p$-Wasserstein case, our bounds ex...
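For intuition, the DRO-based adversarial training the abstract refers to is often implemented through a Lagrangian (soft-constraint) relaxation of the OT model neighborhood: the inner problem perturbs each training sample to maximize the loss minus a transport-cost penalty. The sketch below is a minimal illustration of that inner maximization, assuming a linear logistic classifier and a squared-norm transport cost; these modeling choices, and all function and parameter names, are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def adversarial_perturbation(x, y, w, lam=1.0, steps=100, lr=0.1):
    """Inner maximization of a Lagrangian-relaxed OT-DRO objective:

        max_z  loss(w; z, y) - lam * c(z, x),   c(z, x) = ||z - x||^2,

    where loss is the logistic loss of a linear classifier (an
    illustrative choice). Solved by plain gradient ascent.
    """
    z = x.copy()
    for _ in range(steps):
        margin = y * (w @ z)
        # d/dz of log(1 + exp(-margin)) = -y * sigmoid(-margin) * w
        grad_loss = -y * w / (1.0 + np.exp(margin))
        # gradient of the penalized objective: ascend loss, descend cost
        grad = grad_loss - 2.0 * lam * (z - x)
        z = z + lr * grad
    return z

# usage: perturb one 2-D sample against a fixed linear classifier
w = np.array([1.0, -1.0])   # classifier weights (illustrative)
x = np.array([0.5, -0.5])   # clean input
y = 1.0                     # label in {-1, +1}
z = adversarial_perturbation(x, y, w)
```

The penalty weight `lam` plays the role of the soft constraint: larger values keep the adversarial sample closer to the original, mimicking a tighter OT neighborhood, while small values allow larger, more damaging perturbations.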