[2512.19735] Improving Fairness of Large Language Model-Based ICU Mortality Prediction via Case-Based Prompting
Computer Science > Machine Learning
arXiv:2512.19735 (cs)
[Submitted on 17 Dec 2025 (v1), last revised 23 Mar 2026 (this version, v3)]

Title: Improving Fairness of Large Language Model-Based ICU Mortality Prediction via Case-Based Prompting
Authors: Gangxiong Zhang, Yongchao Long, Yuxi Zhou, Yong Zhang, Shenda Hong

Abstract: Accurately predicting mortality risk in intensive care unit (ICU) patients is essential for clinical decision-making. Although large language models (LLMs) show strong potential in structured medical prediction tasks, their outputs may exhibit biases related to demographic attributes such as sex, age, and race, limiting their reliability in fairness-critical clinical settings. Existing debiasing methods often degrade predictive performance, making it difficult to balance fairness and accuracy. In this study, we systematically analyze fairness issues in LLM-based ICU mortality prediction and propose a clinically adaptive prompting framework that improves both performance and fairness without model retraining. We first design a multi-dimensional bias assessment scheme to identify subgroup disparities. Based on this, we introduce CAse Prompting (CAP), a training-free framework that integrates existing debiasing strategie...
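The abstract describes CAP only at a high level, so the details of its prompting pipeline are not given here. Purely as an illustration of the general case-based-prompting idea (not the paper's actual implementation), a minimal sketch might retrieve the past ICU cases most similar to the query patient and prepend them as few-shot examples. All feature names, values, and outcomes below are hypothetical.

```python
import math

# Hypothetical mini case bank: (features, outcome) pairs. Feature values
# (age, heart rate, lactate) are illustrative, not from the paper's dataset.
CASE_BANK = [
    ({"age": 72, "hr": 110, "lactate": 4.1}, "died"),
    ({"age": 45, "hr": 82,  "lactate": 1.2}, "survived"),
    ({"age": 68, "hr": 95,  "lactate": 2.8}, "survived"),
    ({"age": 80, "hr": 120, "lactate": 5.0}, "died"),
]

def distance(a, b):
    """Euclidean distance over the query's numeric features."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def retrieve_cases(query, bank, k=2):
    """Return the k past cases most similar to the query patient."""
    return sorted(bank, key=lambda rec: distance(query, rec[0]))[:k]

def build_prompt(query, cases):
    """Assemble a few-shot prompt from retrieved cases plus the query."""
    lines = ["Predict ICU mortality (died/survived) for the final patient."]
    for feats, outcome in cases:
        desc = ", ".join(f"{k}={v}" for k, v in feats.items())
        lines.append(f"Patient: {desc} -> Outcome: {outcome}")
    desc = ", ".join(f"{k}={v}" for k, v in query.items())
    lines.append(f"Patient: {desc} -> Outcome:")
    return "\n".join(lines)

query = {"age": 70, "hr": 105, "lactate": 3.5}
prompt = build_prompt(query, retrieve_cases(query, CASE_BANK))
```

The prompt ends with the query patient's features and an open "Outcome:" slot for the LLM to complete; a fairness-aware variant could additionally balance the retrieved cases across demographic subgroups, in the spirit of the bias assessment the abstract describes.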