[2602.13651] Cumulative Utility Parity for Fair Federated Learning under Intermittent Client Participation

arXiv - AI 3 min read Article

Summary

This paper introduces cumulative utility parity, a fairness principle for federated learning that evaluates whether clients receive comparable long-term benefit per participation opportunity, addressing bias against clients whose participation is intermittent.

Why It Matters

As federated learning systems become more prevalent, ensuring fairness in client participation is crucial. This research highlights a new fairness principle that can enhance representation and performance in real-world applications, making it relevant for developers and researchers in AI and machine learning.

Key Takeaways

  • Cumulative utility parity evaluates long-term benefits for clients in federated learning.
  • The proposed method addresses biases from uneven client participation.
  • Experiments show improved representation parity without sacrificing performance.

Computer Science > Machine Learning · arXiv:2602.13651 (cs) · Submitted on 14 Feb 2026

Title: Cumulative Utility Parity for Fair Federated Learning under Intermittent Client Participation
Authors: Stefan Behfar, Richard Mortier

Abstract: In real-world federated learning (FL) systems, client participation is intermittent, heterogeneous, and often correlated with data characteristics or resource constraints. Existing fairness approaches in FL primarily focus on equalizing loss or accuracy conditional on participation, implicitly assuming that clients have comparable opportunities to contribute over time. However, when participation itself is uneven, these objectives can lead to systematic under-representation of intermittently available clients, even if per-round performance appears fair. We propose cumulative utility parity, a fairness principle that evaluates whether clients receive comparable long-term benefit per participation opportunity, rather than per training round. To operationalize this notion, we introduce availability-normalized cumulative utility, which disentangles unavoidable physical constraints from avoidable algorithmic bias arising from scheduling and aggregation. Experiments on temporally skewed, non-IID federated benchmarks demonstrate that our approach substantially im...
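The abstract describes availability-normalized cumulative utility: a client's accumulated benefit divided by the number of rounds it was actually available to participate, so that physically absent rounds do not count against it. The paper's exact definition is not reproduced here; the sketch below assumes a simple form where per-round "utility" is any scalar benefit (e.g., loss reduction attributed to the client) and parity is measured as the spread of the normalized values across clients. Function names and the gap metric are illustrative, not from the paper.

```python
# Hedged sketch of availability-normalized cumulative utility.
# Assumptions (not from the paper): utility[t] is the scalar benefit a client
# received in round t (0.0 when it did not participate), and available[t] is
# True when the client could have participated in round t.

def availability_normalized_utility(utilities, available):
    """Cumulative utility divided by the number of availability opportunities."""
    opportunities = sum(available)
    if opportunities == 0:
        return 0.0  # never available: no opportunity to normalize against
    total = sum(u for u, a in zip(utilities, available) if a)
    return total / opportunities

def parity_gap(clients):
    """Spread (max - min) of normalized utilities; 0 means perfect parity."""
    values = [availability_normalized_utility(u, a) for u, a in clients]
    return max(values) - min(values)

# Example: client A is available in rounds 0 and 2; client B in rounds 0 and 1.
client_a = ([1.0, 0.0, 2.0], [True, False, True])   # total 3.0 over 2 rounds
client_b = ([0.5, 0.5, 0.0], [True, True, False])   # total 1.0 over 2 rounds
```

Under this toy definition, client A's normalized utility is 1.5 and client B's is 0.5, so the parity gap is 1.0; an availability-aware scheduler would aim to shrink that spread rather than equalize raw per-round accuracy.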

Related Articles

LLMs

[R] The Lyra Technique — A framework for interpreting internal cognitive states in LLMs (Zenodo, open access)

We're releasing a paper on a new framework for reading and interpreting the internal cognitive states of large language models: "The Lyra...

Reddit - Machine Learning · 1 min ·
Machine Learning

[P] citracer: a small CLI tool to trace where a concept comes from in a citation graph

Hi all, I made a small tool that I've been using for my own literature reviews and figured I'd share in case it's useful to anyone else. ...

Reddit - Machine Learning · 1 min ·
LLMs

Looking to build a production-level AI/ML project (agentic systems), need guidance on what to build

Hi everyone, I’m a final-year undergraduate AI/ML student currently focusing on applied AI / agentic systems. So far, I’ve spent time und...

Reddit - ML Jobs · 1 min ·
Machine Learning

Meta is reentering the AI race with a new model called Muse Spark | The Verge

Meta Superintelligence Labs has unveiled a new AI model called Muse Spark that will soon roll out across apps like Instagram and Facebook.

The Verge - AI · 5 min ·