[2602.21721] Private and Robust Contribution Evaluation in Federated Learning

arXiv - Machine Learning · 4 min read

Summary

This paper presents novel methods for evaluating contributions in federated learning while ensuring privacy and robustness, addressing vulnerabilities in existing approaches.

Why It Matters

As federated learning gains traction, ensuring fair and secure contribution evaluation is crucial for collaborative machine learning. This research introduces methods that balance privacy with effective performance evaluation, which is essential for real-world applications, particularly in sensitive fields like healthcare.

Key Takeaways

  • Introduces two new contribution evaluation methods compatible with secure aggregation.
  • Methods ensure fairness, privacy, and robustness in federated learning contexts.
  • Demonstrates improved performance over existing evaluation baselines in practical applications.
  • Addresses vulnerabilities in current marginal-contribution methods.
  • Provides theoretical guarantees for the proposed methods' effectiveness.

Computer Science > Cryptography and Security
arXiv:2602.21721 (cs) [Submitted on 25 Feb 2026]

Title: Private and Robust Contribution Evaluation in Federated Learning
Authors: Delio Jaramillo Velez, Gergely Biczok, Alexandre Graell i Amat, Johan Ostman, Balazs Pejo

Abstract: Cross-silo federated learning allows multiple organizations to collaboratively train machine learning models without sharing raw data, but client updates can still leak sensitive information through inference attacks. Secure aggregation protects privacy by hiding individual updates, yet it complicates contribution evaluation, which is critical for fair rewards and detecting low-quality or malicious participants. Existing marginal-contribution methods, such as the Shapley value, are incompatible with secure aggregation, and practical alternatives, such as Leave-One-Out, are crude and rely on self-evaluation. We introduce two marginal-difference contribution scores compatible with secure aggregation. Fair-Private satisfies standard fairness axioms, while Everybody-Else eliminates self-evaluation and provides resistance to manipulation, addressing a largely overlooked vulnerability. We provide theoretical guarantees for fairness, privacy, robustness, and computational efficiency, and evaluate our methods on multiple medical image datasets and CIFAR10 in cross-s...
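For context on the baselines the abstract contrasts, here is a minimal sketch of the two standard marginal-contribution scores it names: the Shapley value and Leave-One-Out (LOO). This is not the paper's Fair-Private or Everybody-Else method; the `evaluate` function is a toy stand-in for validation utility of an aggregated model, whereas in federated learning each evaluation would require re-aggregating (and possibly retraining) with a given subset of clients.

```python
# Illustrative sketch of the Shapley and Leave-One-Out baselines
# mentioned in the abstract (NOT the paper's proposed methods).
from itertools import combinations
from math import factorial

def leave_one_out(clients, evaluate):
    """Score each client by utility(all) - utility(all without that client)."""
    full = evaluate(frozenset(clients))
    return {c: full - evaluate(frozenset(clients) - {c}) for c in clients}

def shapley(clients, evaluate):
    """Exact Shapley values: weighted average of a client's marginal
    contribution over all coalitions of the other clients."""
    n = len(clients)
    scores = {c: 0.0 for c in clients}
    for c in clients:
        rest = [x for x in clients if x != c]
        for k in range(n):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for subset in combinations(rest, k):
                s = frozenset(subset)
                scores[c] += weight * (evaluate(s | {c}) - evaluate(s))
    return scores

# Toy additive utility: each client contributes a fixed "accuracy" gain.
quality = {"A": 0.3, "B": 0.2, "C": 0.1}
evaluate = lambda s: sum(quality[c] for c in s)

loo = leave_one_out(list(quality), evaluate)
sv = shapley(list(quality), evaluate)
```

With this additive toy utility, both scores recover each client's quality exactly; on real federated models they diverge, and the Shapley value's exponential number of coalition evaluations (plus its need to inspect per-client updates) is what makes it incompatible with secure aggregation in practice.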

