[2602.23296] Conformalized Neural Networks for Federated Uncertainty Quantification under Dual Heterogeneity

arXiv - Machine Learning 4 min read Article

Summary

This article presents FedWQ-CP, a novel approach to federated uncertainty quantification that addresses the joint effects of data heterogeneity and model heterogeneity, enhancing reliability in federated learning systems.

Why It Matters

As federated learning becomes increasingly prevalent, ensuring the reliability of uncertainty quantification is crucial for deploying effective AI models. This research addresses a significant gap in existing methods by integrating data and model heterogeneity, which can lead to better performance and reduced risks of local failures in AI applications.

Key Takeaways

  • FedWQ-CP balances empirical coverage performance with efficiency in federated learning.
  • The approach allows for agent-server calibration in a single communication round.
  • It maintains both agent-wise and global coverage while minimizing prediction intervals.
  • Experimental results demonstrate effectiveness across various datasets.
  • Addresses the joint effects of data and model heterogeneity in uncertainty quantification.

Computer Science > Machine Learning — arXiv:2602.23296 (cs) [Submitted on 26 Feb 2026]

Title: Conformalized Neural Networks for Federated Uncertainty Quantification under Dual Heterogeneity
Authors: Quang-Huy Nguyen, Jiaqi Wang, Wei-Shinn Ku

Abstract: Federated learning (FL) faces challenges in uncertainty quantification (UQ). Without reliable UQ, FL systems risk deploying overconfident models at under-resourced agents, leading to silent local failures despite seemingly satisfactory global performance. Existing federated UQ approaches often address data heterogeneity or model heterogeneity in isolation, overlooking their joint effect on coverage reliability across agents. Conformal prediction is a widely used distribution-free UQ framework, yet its application in heterogeneous FL settings remains underexplored. We propose FedWQ-CP, a simple yet effective approach that balances empirical coverage performance with efficiency at both the global and agent levels under dual heterogeneity. FedWQ-CP performs agent-server calibration in a single communication round. On each agent, conformity scores are computed on calibration data and a local quantile threshold is derived. Each agent then transmits only its quantile threshold and calibration sample size to the server. The server ...
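The calibration protocol described in the abstract — each agent computes conformity scores on held-out calibration data, derives a local quantile threshold, and transmits only that threshold plus its calibration sample size to the server — can be sketched in a few lines. The abstract is truncated before the server's aggregation rule, so the size-weighted combination below, along with all function and variable names, is an illustrative assumption rather than the paper's exact method:

```python
import numpy as np

def agent_calibrate(residuals, alpha=0.1):
    """Agent side: derive a local conformal quantile threshold.

    `residuals` are conformity scores (e.g. absolute prediction errors)
    on the agent's held-out calibration set.
    """
    n = len(residuals)
    # Standard finite-sample-corrected level for split conformal prediction.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals, level, method="higher")
    # Only the threshold and the calibration size leave the agent.
    return float(q), n

def server_aggregate(local_quantiles, sample_sizes):
    """Server side: combine local thresholds in one communication round.

    A calibration-size-weighted average is one plausible rule; the
    paper's actual aggregation is not shown in the truncated abstract.
    """
    w = np.asarray(sample_sizes, dtype=float)
    q = np.asarray(local_quantiles, dtype=float)
    return float(np.sum(w * q) / np.sum(w))

# Single-round flow: each agent calibrates locally, the server aggregates.
agent_residuals = [np.abs(np.random.default_rng(s).normal(size=200)) for s in range(3)]
thresholds, sizes = zip(*(agent_calibrate(r) for r in agent_residuals))
q_global = server_aggregate(thresholds, sizes)
```

A split-conformal prediction interval at a test point is then `prediction ± q_global` (or `± q_local` for agent-wise coverage); the quantile level ⌈(n+1)(1−α)⌉/n is the usual finite-sample correction for split conformal prediction.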

Related Articles

LLMs

[R] Depth-first pruning transfers: GPT-2 → TinyLlama with stable gains and minimal loss

TL;DR: Removing the right layers (instead of shrinking all layers) makes transformer models ~8–12% smaller with only ~6–8% quality loss, ...

Reddit - Machine Learning · 1 min ·
LLMs

Built a training stability monitor that detects instability before your loss curve shows anything — open sourced the core today

Been working on a weight divergence trajectory curvature approach to detecting neural network training instability. Treats weight updates...

Reddit - Artificial Intelligence · 1 min ·
AI Infrastructure

UMKC Announces New Master of Science in Artificial Intelligence

UMKC announces a new Master of Science in Artificial Intelligence program aimed at addressing workforce demand for AI expertise, set to l...

AI News - General · 4 min ·
Machine Learning

Improving AI models’ ability to explain their predictions

AI News - General · 9 min ·
