[2512.19131] Evidential Trust-Aware Model Personalization in Decentralized Federated Learning for Wearable IoT
Summary
The paper presents Murmura, a framework for trust-aware model personalization in decentralized federated learning (DFL) for wearable IoT devices, addressing challenges of statistical heterogeneity in local data.
Why It Matters
This research tackles model personalization in decentralized environments, an increasingly pressing issue as wearable IoT devices proliferate. By letting devices collaborate selectively while preserving model accuracy on their own data, it opens pathways for more effective machine learning in real-world deployments.
Key Takeaways
- Murmura leverages evidential deep learning to enhance peer compatibility in DFL.
- The framework significantly reduces the accuracy drop incurred when moving from IID to non-IID data conditions.
- It achieves faster convergence and stable accuracy across various hyperparameters.
- Evidential uncertainty serves as a foundation for compatibility-aware personalization.
- The approach is particularly beneficial for wearable IoT applications.
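The takeaways above hinge on epistemic uncertainty from a Dirichlet-based evidential model. As a minimal sketch (the function name and the vacuity-style formula `u = K / S` are standard in evidential deep learning, but the paper's exact formulation is not reproduced here), the uncertainty of a `K`-class evidential head with per-class evidence `e_k` and Dirichlet parameters `alpha_k = e_k + 1` can be computed as:

```python
import numpy as np

def epistemic_uncertainty(evidence):
    """Vacuity-style epistemic uncertainty from a Dirichlet evidential head.

    evidence: non-negative per-class evidence vector e_k. The Dirichlet
    parameters are alpha_k = e_k + 1, and the uncertainty u = K / S with
    S = sum(alpha) lies in (0, 1]: u -> 1 when no evidence is available,
    u -> 0 as evidence accumulates.
    """
    evidence = np.asarray(evidence, dtype=float)
    k = evidence.size
    alpha = evidence + 1.0
    return k / alpha.sum()

# No evidence at all: maximal uncertainty
print(epistemic_uncertainty([0.0, 0.0, 0.0]))  # -> 1.0
# Strong evidence for one class: low uncertainty
print(round(epistemic_uncertainty([50.0, 0.0, 0.0]), 3))  # -> 0.057
```

In the DFL setting, a node can evaluate a peer's evidential model on its own local samples; a high average `u` suggests the peer's knowledge does not transfer to this node's distribution.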
arXiv:2512.19131 (cs.DC). Submitted on 22 Dec 2025 (v1), last revised 16 Feb 2026 (this version, v2).
Authors: Murtaza Rangwala, Richard O. Sinnott, Rajkumar Buyya
Abstract: Decentralized federated learning (DFL) enables collaborative model training across edge devices without centralized coordination, offering resilience against single points of failure. However, statistical heterogeneity arising from non-identically distributed local data creates a fundamental challenge: nodes must learn personalized models adapted to their local distributions while selectively collaborating with compatible peers. Existing approaches either enforce a single global model that fits no one well, or rely on heuristic peer selection mechanisms that cannot distinguish between peers with genuinely incompatible data distributions and those with valuable complementary knowledge. We present Murmura, a framework that leverages evidential deep learning to enable trust-aware model personalization in DFL. Our key insight is that epistemic uncertainty from Dirichlet-based evidential models directly indicates peer compatibility: high epistemic unce...
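The abstract describes selectively collaborating with compatible peers based on epistemic uncertainty. One way such compatibility-aware aggregation could look is sketched below; the function name, the softmax-over-negative-uncertainty weighting, and the 50/50 blend with the local model are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def trust_weighted_average(local_params, peer_params, peer_uncertainties, tau=1.0):
    """Illustrative compatibility-weighted aggregation for one DFL node.

    peer_uncertainties[i] is the mean epistemic uncertainty of peer i's
    evidential model evaluated on this node's local data. Lower uncertainty
    means higher compatibility, so trust weights come from a softmax over
    the negated uncertainties (tau controls the sharpness).
    """
    u = np.asarray(peer_uncertainties, dtype=float)
    logits = -u / tau
    w = np.exp(logits - logits.max())  # numerically stable softmax
    w /= w.sum()
    agg = sum(wi * p for wi, p in zip(w, peer_params))
    # Blend with the local model to retain personalization
    return 0.5 * np.asarray(local_params) + 0.5 * agg

# Toy example: the low-uncertainty (more compatible) peer dominates
local = np.array([1.0, 1.0])
peers = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
out = trust_weighted_average(local, peers, peer_uncertainties=[0.9, 0.1])
```

Here the second peer (uncertainty 0.1) receives a larger weight than the first (0.9), pulling the aggregate toward its parameters while the local blend keeps the node's own model in play.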