[2602.21721] Private and Robust Contribution Evaluation in Federated Learning
Summary
This paper presents two contribution-evaluation methods for federated learning that remain compatible with secure aggregation, ensuring privacy and robustness while addressing vulnerabilities in existing approaches.
Why It Matters
As federated learning gains traction, ensuring fair and secure contribution evaluation is crucial for collaborative machine learning. This research introduces methods that balance privacy with effective performance evaluation, which is essential for real-world applications, particularly in sensitive fields like healthcare.
Key Takeaways
- Introduces two new contribution evaluation methods compatible with secure aggregation.
- Methods ensure fairness, privacy, and robustness in federated learning contexts.
- Demonstrates improved performance over existing evaluation baselines in practical applications.
- Addresses vulnerabilities in current marginal-contribution methods.
- Provides theoretical guarantees for the proposed methods' effectiveness.
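To make the marginal-contribution idea concrete, here is a minimal sketch of the Leave-One-Out (LOO) baseline the paper identifies as crude: each client is scored by how much the aggregate's utility drops when that client's update is excluded. The `evaluate` utility, the mean aggregation, and the toy vectors are illustrative assumptions, not the authors' Fair-Private or Everybody-Else scores.

```python
import numpy as np

def leave_one_out_scores(updates, evaluate):
    """Score each client by full-aggregate utility minus the utility
    of the aggregate computed without that client's update.
    A generic illustration of the LOO baseline, not the paper's method."""
    full = evaluate(np.mean(updates, axis=0))
    scores = []
    for i in range(len(updates)):
        rest = [u for j, u in enumerate(updates) if j != i]
        scores.append(full - evaluate(np.mean(rest, axis=0)))
    return scores

# Toy utility: closeness of the aggregated update to a "true" direction.
target = np.array([1.0, 1.0])
utility = lambda agg: -np.linalg.norm(agg - target)

updates = [np.array([1.0, 1.0]),    # helpful client
           np.array([1.1, 0.9]),    # helpful client
           np.array([-1.0, -1.0])]  # harmful client
scores = leave_one_out_scores(updates, utility)
```

On this toy example the harmful client receives a negative score and both helpful clients positive ones. Note that computing each leave-one-out aggregate requires access to individual updates, which is exactly what secure aggregation hides; this is the tension the paper's methods resolve.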
Computer Science > Cryptography and Security
arXiv:2602.21721 (cs) [Submitted on 25 Feb 2026]
Title: Private and Robust Contribution Evaluation in Federated Learning
Authors: Delio Jaramillo Velez, Gergely Biczok, Alexandre Graell i Amat, Johan Ostman, Balazs Pejo
Abstract: Cross-silo federated learning allows multiple organizations to collaboratively train machine learning models without sharing raw data, but client updates can still leak sensitive information through inference attacks. Secure aggregation protects privacy by hiding individual updates, yet it complicates contribution evaluation, which is critical for fair rewards and detecting low-quality or malicious participants. Existing marginal-contribution methods, such as the Shapley value, are incompatible with secure aggregation, and practical alternatives, such as Leave-One-Out, are crude and rely on self-evaluation. We introduce two marginal-difference contribution scores compatible with secure aggregation. Fair-Private satisfies standard fairness axioms, while Everybody-Else eliminates self-evaluation and provides resistance to manipulation, addressing a largely overlooked vulnerability. We provide theoretical guarantees for fairness, privacy, robustness, and computational efficiency, and evaluate our methods on multiple medical image datasets and CIFAR10 in cross-s...
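The abstract's claim that secure aggregation "hides individual updates" can be illustrated with pairwise additive masking: each pair of clients shares a random mask that one adds and the other subtracts, so the masks cancel in the sum but every individual submission looks random to the server. This is a toy sketch of the mask-cancellation idea only; a real protocol (e.g. Bonawitz-style secure aggregation) additionally needs key agreement and dropout handling, which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_masks(n, dim):
    """For each pair (i, j) with i < j, draw a shared random mask;
    client i adds it and client j subtracts it, so all masks cancel
    when the server sums the n masked updates."""
    masks = [np.zeros(dim) for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.normal(size=dim)
            masks[i] += m
            masks[j] -= m
    return masks

n, dim = 3, 4
updates = [rng.normal(size=dim) for _ in range(n)]
masks = pairwise_masks(n, dim)
masked = [u + m for u, m in zip(updates, masks)]  # what the server sees
aggregate = sum(masked)  # equals sum(updates); individuals stay hidden
```

Because the server only ever sees `masked` vectors, per-client scores such as Shapley or Leave-One-Out cannot be computed directly, which is the incompatibility the paper's marginal-difference scores are designed to work around.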