[2603.21656] TrustFed: Enabling Trustworthy Medical AI under Data Privacy Constraints
Computer Science > Machine Learning
arXiv:2603.21656 (cs)
[Submitted on 23 Mar 2026]

Title: TrustFed: Enabling Trustworthy Medical AI under Data Privacy Constraints
Authors: Vagish Kumar, Syed Bahauddin Alam, Souvik Chakraborty

Abstract: Protecting patient privacy remains a fundamental barrier to scaling machine learning across healthcare institutions, where centralizing sensitive data is often infeasible due to ethical, legal, and regulatory constraints. Federated learning offers a promising alternative by enabling privacy-preserving, multi-institutional training without sharing raw patient data; however, real-world deployments face severe challenges from data heterogeneity, site-specific biases, and class imbalance, which degrade predictive reliability and render existing uncertainty quantification methods ineffective. Here, we present TrustFed, a federated uncertainty quantification framework that provides distribution-free, finite-sample coverage guarantees under heterogeneous and imbalanced healthcare data, without requiring centralized access. TrustFed introduces a representation-aware client assignment mechanism that leverages internal model representations to enable effective calibration across institutions, along with a soft-nearest threshold aggregation strategy that mitig...
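The "distribution-free, finite-sample coverage guarantees" the abstract refers to are characteristic of conformal prediction. As a minimal illustration of that style of guarantee (not TrustFed's actual federated method, whose details are not given in this abstract), the sketch below runs standard split conformal prediction on synthetic data: calibrate a residual quantile on held-out data, then form prediction intervals that cover new points with probability at least 1 − α. All variable names and the toy model here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a fixed predictor f(x) = 2x and noisy labels y = 2x + noise.
def model(x):
    return 2.0 * x

n_cal, n_test = 500, 1000
x_cal = rng.uniform(0, 1, n_cal)
y_cal = 2.0 * x_cal + rng.normal(0, 0.1, n_cal)

# Nonconformity scores on held-out calibration data: absolute residuals.
scores = np.abs(y_cal - model(x_cal))

alpha = 0.1  # target miscoverage rate (aim for >= 90% coverage)
# Finite-sample-corrected quantile level: ceil((n+1)(1-alpha)) / n.
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
qhat = np.quantile(scores, q_level, method="higher")

# Prediction interval for a new point: [f(x) - qhat, f(x) + qhat].
x_test = rng.uniform(0, 1, n_test)
y_test = 2.0 * x_test + rng.normal(0, 0.1, n_test)
covered = np.abs(y_test - model(x_test)) <= qhat
print(f"empirical coverage: {covered.mean():.3f}")
```

In the federated setting the abstract describes, the hard part is that each institution's calibration data follows a different distribution, so a single pooled quantile like `qhat` above no longer yields valid coverage per site; TrustFed's representation-aware client assignment and threshold aggregation are presented as addressing exactly that gap.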