[2502.13024] Fragility-aware Classification for Understanding Risk and Improving Generalization
Computer Science > Machine Learning
arXiv:2502.13024 (cs)
[Submitted on 18 Feb 2025 (v1), last revised 2 Apr 2026 (this version, v2)]

Title: Fragility-aware Classification for Understanding Risk and Improving Generalization
Authors: Chen Yang, Zheng Cui, Daniel Zhuoyu Long, Jin Qi, Ruohan Zhan

Abstract: Classification models play a central role in data-driven decision-making applications such as medical diagnosis, recommendation systems, and risk assessment. Traditional performance metrics, such as accuracy and AUC, focus on overall error rates but fail to account for the confidence of incorrect predictions, i.e., the risk of confident misjudgments. This limitation is particularly consequential in safety-critical and cost-sensitive settings, where overconfident errors can lead to severe outcomes. To address this issue, we propose the Fragility Index (FI), a novel performance metric that evaluates classifiers from a risk-averse perspective by capturing the tail risk of confident misjudgments. We formulate FI within a robust satisficing (RS) framework to ensure robustness under distributional uncertainty. Building on this, we develop a tractable training framework that directly targets FI via a surrogate loss, and show that models trained under this framework admit provable bounds on FI. We further derive exact reformulatio...
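To make the abstract's core idea concrete, here is a minimal sketch of a "tail risk of confident misjudgments" metric. This is not the paper's Fragility Index (the paper defines FI through a robust satisficing formulation not reproduced here); instead, it computes an assumed illustrative proxy, the conditional value-at-risk (CVaR) of the confidence a classifier assigns to its misclassified samples. The function name and the CVaR choice are this sketch's assumptions, not the authors' definitions.

```python
import numpy as np

def tail_risk_of_confident_errors(probs, labels, alpha=0.9):
    """Illustrative proxy (NOT the paper's FI): CVaR at level `alpha` of
    the confidence assigned to misclassified samples. High values mean
    the model tends to be very confident when it is wrong."""
    probs = np.asarray(probs)
    labels = np.asarray(labels)
    preds = probs.argmax(axis=1)
    wrong = preds != labels
    if not wrong.any():
        return 0.0  # no misjudgments, so no tail risk to measure
    conf_wrong = probs[wrong, preds[wrong]]   # confidence of each wrong prediction
    var = np.quantile(conf_wrong, alpha)      # value-at-risk threshold
    tail = conf_wrong[conf_wrong >= var]      # worst (most confident) errors
    return float(tail.mean())                 # CVaR: average over the tail

# Example: two classifiers with the same accuracy but different error confidence.
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.95, 0.05], [0.6, 0.4]])
labels = np.array([0, 1, 1, 1])  # samples 2 and 3 are misclassified
print(tail_risk_of_confident_errors(probs, labels))  # → 0.95
```

The example illustrates the abstract's motivation: accuracy alone treats the two errors identically, while a tail-risk view penalizes the 0.95-confidence error far more than the 0.6-confidence one.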