[2602.15380] Fractional-Order Federated Learning
Summary
The paper introduces Fractional-Order Federated Averaging (FOFedAvg), a variant of FedAvg that uses memory-aware fractional-order updates to improve communication efficiency and accelerate convergence in federated training.
Why It Matters
This research addresses two persistent challenges in federated learning, slow convergence and high communication cost, and does so while remaining robust to heterogeneous client data. That combination makes it relevant to practitioners and researchers working in distributed machine learning and data privacy.
Key Takeaways
- FOFedAvg improves communication efficiency and accelerates convergence in federated learning.
- The method uses fractional-order updates to exploit long-range dependencies in the gradient history, not just the most recent gradient.
- FOFedAvg shows competitive performance against established federated optimization algorithms.
- Theoretical proofs support the convergence of FOFedAvg under standard assumptions.
- This approach offers a practical solution for distributed training on heterogeneous data.
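The paper does not reproduce its update rule in this summary, so the following is a minimal sketch of what a memory-aware fractional-order SGD step typically looks like: past gradients are weighted by Grünwald–Letnikov binomial coefficients of a fractional order `alpha`, rather than using only the latest gradient. The function names (`gl_coefficients`, `fosgd_step`) and the truncation depth `K` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gl_coefficients(alpha, K):
    # Grünwald–Letnikov coefficients c_k = (-1)^k * C(alpha, k),
    # via the standard recurrence c_0 = 1, c_k = c_{k-1} * (1 - (alpha + 1) / k).
    # For alpha = 1 this degenerates to [1, -1, 0, 0, ...], i.e. a plain
    # gradient difference; fractional alpha spreads weight over the history.
    c = np.empty(K)
    c[0] = 1.0
    for k in range(1, K):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

def fosgd_step(w, grads, lr, coeffs):
    # One memory-aware step: `grads` holds the K most recent stochastic
    # gradients, newest first; the descent direction is their
    # coefficient-weighted sum instead of the single latest gradient.
    update = sum(c * g for c, g in zip(coeffs, grads))
    return w - lr * update
```

In a federated setting each client would run steps like this locally before the server averages the resulting models; the extra memory lives entirely on the client, so nothing beyond the model update needs to be communicated.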
arXiv:2602.15380 (cs) [Submitted on 17 Feb 2026]
Authors: Mohammad Partohaghighi, Roummel Marcia, YangQuan Chen
Abstract: Federated learning (FL) allows remote clients to train a global model collaboratively while protecting client privacy. Despite its privacy-preserving benefits, FL has significant drawbacks, including slow convergence, high communication cost, and non-independent-and-identically-distributed (non-IID) data. In this work, we present a novel FedAvg variation called Fractional-Order Federated Averaging (FOFedAvg), which incorporates Fractional-Order Stochastic Gradient Descent (FOSGD) to capture long-range relationships and deeper historical information. By introducing memory-aware fractional-order updates, FOFedAvg improves communication efficiency and accelerates convergence while mitigating instability caused by heterogeneous, non-IID client data. We compare FOFedAvg against a broad set of established federated optimization algorithms on benchmark datasets including MNIST, FEMNIST, CIFAR-10, CIFAR-100, EMNIST, the Cleveland heart disease dataset, Sent140, PneumoniaMNIST, and Edge-IIoTset. Across a range of non-IID partitioning schemes, FOFedAvg is competitive with, and often outperforms, these baselines in terms of test performance and conv...
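For context, the FedAvg baseline that FOFedAvg extends aggregates client models on the server as a weighted average, with weights proportional to each client's local dataset size. This is the standard FedAvg server step, not the paper's code:

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    # Standard FedAvg server step: average the clients' model parameters,
    # weighting each client by the fraction of data it holds.
    total = sum(client_sizes)
    agg = np.zeros_like(client_weights[0])
    for w, n in zip(client_weights, client_sizes):
        agg += (n / total) * w
    return agg
```

FOFedAvg keeps this aggregation structure but changes how each client computes its local update, replacing plain SGD with fractional-order (memory-aware) steps.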