[2602.15380] Fractional-Order Federated Learning

arXiv - Machine Learning 3 min read Article

Summary

The paper introduces Fractional-Order Federated Averaging (FOFedAvg), a FedAvg variant that improves communication efficiency and convergence speed in federated learning by using memory-aware fractional-order gradient updates.

Why It Matters

Federated learning suffers from slow convergence, high communication cost, and instability on non-independent-and-identically-distributed (non-IID) client data. By addressing these problems with a memory-aware optimizer, this work is relevant to practitioners and researchers in distributed machine learning and data privacy.

Key Takeaways

  • FOFedAvg improves communication efficiency and accelerates convergence in federated learning.
  • The method uses fractional-order updates to capture long-range relationships and deeper historical information in the optimization trajectory.
  • FOFedAvg shows competitive performance against established federated optimization algorithms.
  • Theoretical proofs support the convergence of FOFedAvg under standard assumptions.
  • This approach offers a practical solution for distributed training on heterogeneous data.
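The excerpt does not reproduce the paper's pseudocode, but fractional-order gradient methods are commonly built from Grünwald–Letnikov (GL) binomial coefficients, which weight a window of past gradients so that older gradients contribute with decaying (and sign-alternating) influence. The sketch below is an illustrative, assumed form of such a memory-aware step, not the paper's exact FOSGD update; the names `gl_coefficients` and `fosgd_step`, and the choice of hyperparameters, are hypothetical.

```python
import numpy as np

def gl_coefficients(alpha, memory):
    """Grünwald–Letnikov coefficients (-1)^k * C(alpha, k), via the
    recurrence c_0 = 1, c_k = c_{k-1} * (1 - (alpha + 1) / k)."""
    c = np.empty(memory)
    c[0] = 1.0
    for k in range(1, memory):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

def fosgd_step(w, grad_history, lr=0.1, alpha=0.9, memory=5):
    """One memory-aware fractional-order step (illustrative sketch):
    the update direction is a GL-weighted sum of recent gradients,
    newest first, so the optimizer 'remembers' earlier descent
    directions instead of using only the current gradient."""
    c = gl_coefficients(alpha, min(memory, len(grad_history)))
    direction = sum(ck * g for ck, g in zip(c, reversed(grad_history)))
    return w - lr * direction
```

With `memory=1` this reduces to a plain SGD step on the newest gradient; larger memory windows are what give the method its long-range, history-aware behavior.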

Computer Science > Machine Learning, arXiv:2602.15380 (cs), submitted on 17 Feb 2026.

Title: Fractional-Order Federated Learning
Authors: Mohammad Partohaghighi, Roummel Marcia, YangQuan Chen

Abstract: Federated learning (FL) allows remote clients to train a global model collaboratively while protecting client privacy. Despite its privacy-preserving benefits, FL has significant drawbacks, including slow convergence, high communication cost, and non-independent-and-identically-distributed (non-IID) data. In this work, we present a novel FedAvg variation called Fractional-Order Federated Averaging (FOFedAvg), which incorporates Fractional-Order Stochastic Gradient Descent (FOSGD) to capture long-range relationships and deeper historical information. By introducing memory-aware fractional-order updates, FOFedAvg improves communication efficiency and accelerates convergence while mitigating instability caused by heterogeneous, non-IID client data. We compare FOFedAvg against a broad set of established federated optimization algorithms on benchmark datasets including MNIST, FEMNIST, CIFAR-10, CIFAR-100, EMNIST, the Cleveland heart disease dataset, Sent140, PneumoniaMNIST, and Edge-IIoTset. Across a range of non-IID partitioning schemes, FOFedAvg is competitive with, and often outperforms, these baselines in terms of test performance and conv...
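As a FedAvg variation, FOFedAvg presumably keeps the standard server-side step: each round, the server replaces the global model with a sample-size-weighted average of the clients' locally trained models, and the fractional-order change lives in the clients' local optimizer. A minimal sketch of that standard aggregation step (the function name `fedavg_aggregate` is illustrative, not from the paper):

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Standard FedAvg aggregation: the new global model is the
    average of the clients' local models, weighted by each client's
    number of local training samples."""
    total = sum(client_sizes)
    return sum((n / total) * np.asarray(w)
               for w, n in zip(client_weights, client_sizes))
```

For example, averaging two client models where one client holds three times as much data pulls the global model three-quarters of the way toward that client's parameters.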
