[2602.18396] PRISM-FCP: Byzantine-Resilient Federated Conformal Prediction via Partial Sharing
Summary
The paper presents PRISM-FCP, a Byzantine-resilient framework for federated conformal prediction that uses partial model sharing to harden both model training and conformal calibration against adversarial clients.
Why It Matters
As federated learning becomes increasingly prevalent, ensuring the integrity and reliability of models against Byzantine attacks is crucial. Whereas prior federated conformal prediction methods defend only the calibration stage, leaving the learned model exposed to poisoned updates, PRISM-FCP mitigates attacks end-to-end, making it a notable advance in robust uncertainty quantification.
Key Takeaways
- PRISM-FCP improves robustness against Byzantine attacks during both training and calibration phases.
- The framework utilizes partial model sharing to reduce the impact of adversarial updates.
- Extensive experiments show that PRISM-FCP maintains nominal coverage guarantees while reducing communication costs.
- The approach results in lower mean-square error (MSE) and tighter prediction intervals compared to standard methods.
- This work contributes to the field of federated learning by enhancing uncertainty quantification.
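The partial-sharing idea behind the second takeaway (each client transmits only $M$ of $D$ parameters per round, attenuating an adversary's perturbation energy in the aggregate by $M/D$ in expectation) can be illustrated with a minimal sketch. The random coordinate mask and the helper name `partial_share` are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def partial_share(update, M, rng):
    """Keep only M of the D entries of `update`, zeroing the rest.

    Illustrative sketch of partial model sharing: the client transmits
    a randomly chosen subset of M coordinates per round. (The selection
    rule here is an assumption; the paper may use a different one.)
    """
    D = update.size
    idx = rng.choice(D, size=M, replace=False)  # coordinates shared this round
    masked = np.zeros_like(update)
    masked[idx] = update[idx]
    return masked

# Energy of an adversarial perturbation that survives partial sharing:
rng = np.random.default_rng(0)
D, M = 1000, 100
perturbation = rng.normal(size=D)          # Byzantine client's injected noise
shared = partial_share(perturbation, M, rng)
ratio = np.sum(shared**2) / np.sum(perturbation**2)
# On average over random masks, ratio is near M/D, i.e. the M/D
# attenuation factor stated in the abstract.
```

Averaging `ratio` over many rounds recovers the $M/D$ factor, which is why fewer shared parameters per round translate into both lower communication cost and a weaker adversarial footprint in the aggregate.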
Computer Science > Machine Learning
arXiv:2602.18396 (cs)
[Submitted on 20 Feb 2026]
Title: PRISM-FCP: Byzantine-Resilient Federated Conformal Prediction via Partial Sharing
Authors: Ehsan Lari, Reza Arablouei, Stefan Werner
Abstract: We propose PRISM-FCP (Partial shaRing and robust calIbration with Statistical Margins for Federated Conformal Prediction), a Byzantine-resilient federated conformal prediction framework that utilizes partial model sharing to improve robustness against Byzantine attacks during both model training and conformal calibration. Existing approaches address adversarial behavior only in the calibration stage, leaving the learned model susceptible to poisoned updates. In contrast, PRISM-FCP mitigates attacks end-to-end. During training, clients partially share updates by transmitting only $M$ of $D$ parameters per round. This attenuates the expected energy of an adversary's perturbation in the aggregated update by a factor of $M/D$, yielding lower mean-square error (MSE) and tighter prediction intervals. During calibration, clients convert nonconformity scores into characterization vectors, compute distance-based maliciousness scores, and downweight or filter suspected Byzantine contributions before estimating the conformal quantile. Extensive experiments on both synthetic data and...
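The calibration pipeline the abstract describes (nonconformity scores summarized into characterization vectors, distance-based maliciousness scores, filtering suspected Byzantine clients, then a conformal quantile) might look roughly like the sketch below. The choice of summary statistics, the median-distance filter, and the threshold `tau` are my assumptions for illustration, not the paper's definitions:

```python
import numpy as np

def robust_conformal_quantile(client_scores, alpha=0.1, tau=5.0):
    """Sketch of Byzantine-robust federated conformal calibration.

    Each client's nonconformity scores are summarized into a small
    characterization vector; clients far from the robust center get
    high maliciousness scores and are filtered before the conformal
    quantile is estimated. All concrete choices here are illustrative.
    """
    # Characterization vector per client: a few robust summary statistics.
    vecs = np.array([
        [np.median(s), np.percentile(s, 25), np.percentile(s, 75)]
        for s in client_scores
    ])
    center = np.median(vecs, axis=0)               # robust center across clients
    dists = np.linalg.norm(vecs - center, axis=1)  # distance-based maliciousness scores
    scale = np.median(dists) + 1e-12
    keep = dists <= tau * scale                    # filter suspected Byzantine clients
    pooled = np.concatenate([s for s, k in zip(client_scores, keep) if k])
    n = pooled.size
    # Conformal quantile with the usual finite-sample correction.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(pooled, q_level)

# Nine honest clients with similar score distributions plus one Byzantine
# client reporting inflated scores to widen everyone's intervals.
rng = np.random.default_rng(1)
honest = [np.abs(rng.normal(size=100)) for _ in range(9)]
byzantine = [np.full(100, 50.0)]
qhat = robust_conformal_quantile(honest + byzantine, alpha=0.1)
```

Without the filter, the Byzantine client's scores would drag the pooled 90% quantile toward its inflated values; with it, `qhat` stays near the honest clients' quantile, which is the mechanism behind the tighter prediction intervals the paper reports.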