[2602.18396] PRISM-FCP: Byzantine-Resilient Federated Conformal Prediction via Partial Sharing


arXiv - Machine Learning

Summary

The paper presents PRISM-FCP, a Byzantine-resilient framework for federated conformal prediction that improves robustness against Byzantine attacks by having clients share only part of the model during both training and calibration.

Why It Matters

As federated learning becomes increasingly prevalent, ensuring the integrity and reliability of models against Byzantine attacks is crucial. PRISM-FCP offers a novel approach that addresses vulnerabilities not only during calibration but also during training, making it a significant advancement in the field of machine learning.

Key Takeaways

  • PRISM-FCP improves robustness against Byzantine attacks during both training and calibration phases.
  • The framework utilizes partial model sharing to reduce the impact of adversarial updates.
  • Extensive experiments show that PRISM-FCP maintains nominal coverage guarantees while reducing communication costs.
  • The approach results in lower mean-square error (MSE) and tighter prediction intervals compared to standard methods.
  • This work contributes to the field of federated learning by enhancing uncertainty quantification.
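The claimed $M/D$ attenuation of adversarial updates can be sanity-checked numerically. The sketch below assumes the $M$ shared coordinates are chosen uniformly at random each round; the article excerpt does not specify the selection rule, so this is an illustrative assumption, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M = 100, 10  # model dimension, coordinates shared per round

def partial_share(update, rng):
    """Keep only M of the D coordinates; the rest are masked out."""
    mask = np.zeros(D)
    mask[rng.choice(D, size=M, replace=False)] = 1.0
    return update * mask

# An adversarial perturbation injected into one client's update
perturbation = rng.standard_normal(D)
full_energy = np.sum(perturbation**2)

# Average energy of the perturbation surviving the random mask
trials = [np.sum(partial_share(perturbation, rng)**2) for _ in range(20000)]
attenuation = np.mean(trials) / full_energy
print(f"empirical attenuation ~ {attenuation:.3f}, expected M/D = {M / D:.3f}")
```

Under a uniform mask, each coordinate survives with probability $M/D$, so the expected surviving energy is exactly $M/D$ of the full perturbation energy, which the empirical average confirms.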

Computer Science > Machine Learning
arXiv:2602.18396 (cs) [Submitted on 20 Feb 2026]

Title: PRISM-FCP: Byzantine-Resilient Federated Conformal Prediction via Partial Sharing
Authors: Ehsan Lari, Reza Arablouei, Stefan Werner

Abstract: We propose PRISM-FCP (Partial shaRing and robust calIbration with Statistical Margins for Federated Conformal Prediction), a Byzantine-resilient federated conformal prediction framework that utilizes partial model sharing to improve robustness against Byzantine attacks during both model training and conformal calibration. Existing approaches address adversarial behavior only in the calibration stage, leaving the learned model susceptible to poisoned updates. In contrast, PRISM-FCP mitigates attacks end-to-end. During training, clients partially share updates by transmitting only $M$ of $D$ parameters per round. This attenuates the expected energy of an adversary's perturbation in the aggregated update by a factor of $M/D$, yielding lower mean-square error (MSE) and tighter prediction intervals. During calibration, clients convert nonconformity scores into characterization vectors, compute distance-based maliciousness scores, and downweight or filter suspected Byzantine contributions before estimating the conformal quantile. Extensive experiments on both synthetic data and...
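The calibration pipeline described in the abstract (nonconformity scores → characterization vectors → distance-based maliciousness scores → filtering → conformal quantile) can be sketched as below. The paper's exact characterization-vector construction and "Statistical Margins" are not given in this excerpt, so per-client quantile vectors, distance to the coordinate-wise median, and a median-plus-MAD threshold are used here as illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.1  # target miscoverage rate
n_clients, n_cal = 10, 50

# Honest clients hold nonconformity scores ~ |N(0,1)|; one Byzantine
# client inflates its scores to widen everyone's prediction intervals.
scores = [np.abs(rng.standard_normal(n_cal)) for _ in range(n_clients)]
scores[0] = scores[0] + 5.0  # Byzantine client

# Characterization vector per client: empirical quantiles on a fixed grid
grid = np.linspace(0.1, 0.9, 9)
chars = np.array([np.quantile(s, grid) for s in scores])

# Maliciousness score: distance to the coordinate-wise median vector
median_char = np.median(chars, axis=0)
malice = np.linalg.norm(chars - median_char, axis=1)

# Filter clients whose maliciousness exceeds a robust threshold
mad = np.median(np.abs(malice - np.median(malice)))
thresh = np.median(malice) + 3.0 * mad
kept = [s for s, m in zip(scores, malice) if m <= thresh]

# Conformal quantile over the scores of the retained clients
pooled = np.concatenate(kept)
k = int(np.ceil((len(pooled) + 1) * (1 - alpha)))
q_hat = np.sort(pooled)[min(k, len(pooled)) - 1]
print(f"kept {len(kept)}/{n_clients} clients, q_hat = {q_hat:.3f}")
```

With the Byzantine client filtered out, the estimated quantile stays near the honest $1-\alpha$ level instead of being dragged upward by the inflated scores.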

Related Articles

Machine Learning

[D] ICML Rebuttal Question

I am currently working on my response to the rebuttal acknowledgments for ICML and I am doubting how to handle the strawman argument that...

Reddit - Machine Learning · 1 min ·
Machine Learning

[D] ML researcher looking to switch to a product company.

Hey, I am an AI researcher currently working in a deep tech company as a data scientist. Prior to this, I was doing my PhD. My current ro...

Reddit - Machine Learning · 1 min ·
Machine Learning

Building behavioural response models of public figures using Brain scan data (Predict their next move using psychological modelling) [P]

Hey guys, I’m the same creator of Netryx V2, the geolocation tool. I’ve been working on something new called COGNEX. It learns how a pers...

Reddit - Machine Learning · 1 min ·
Machine Learning

[P] bitnet-edge: Ternary-weight CNNs ({-1,0,+1}) on MNIST and CIFAR-10, deployed to ESP32-S3 with zero multiplications

I built a pipeline that takes ternary-quantized CNNs from PyTorch training all the way to bare-metal inference on an ESP32-S3 microcontro...

Reddit - Machine Learning · 1 min ·

