[2512.03363] Adaptive Aggregation with Two Gains in QFL
Summary
The paper presents A2G, a novel framework for adaptive aggregation in quantum federated learning (QFL), addressing performance degradation caused by uneven client quality and network instability.
Why It Matters
As federated learning extends onto quantum-enabled and heterogeneous classical networks, traditional aggregation methods become inadequate. This research introduces a dual-gain approach that improves model performance by accounting for both client quality and the geometry of model updates, making it relevant to future developments in machine learning and quantum computing.
Key Takeaways
- A2G framework improves aggregation in quantum federated learning.
- Addresses issues of uneven client quality and network instability.
- Incorporates geometric blending and client importance modulation.
- Enhances performance over quantum-enabled and heterogeneous classical networks.
- Provides a foundation for future research in quantum-enabled systems.
arXiv:2512.03363 (cs.LG) — Computer Science > Machine Learning
[Submitted on 3 Dec 2025 (v1); last revised 18 Feb 2026 (this version, v2)]
Title: Adaptive Aggregation with Two Gains in QFL
Authors: S. Nanayakkara
Abstract: Federated learning (FL) deployed over quantum-enabled and heterogeneous classical networks faces significant performance degradation due to uneven client quality, stochastic teleportation fidelity, device instability, and geometric mismatch between local and global models. Classical aggregation rules assume Euclidean topology and uniform communication reliability, limiting their suitability for emerging quantum federated systems. This paper introduces A2G (Adaptive Aggregation with Two Gains), a dual-gain framework that jointly regulates geometric blending through a geometry gain and modulates client importance using a QoS gain derived from teleportation fidelity, latency, and instability.
Subjects: Machine Learning (cs.LG); Quantum Physics (quant-ph)
Cite as: arXiv:2512.03363 [cs.LG], https://doi.org/10.48550/arXiv.2512.03363
Submission history: From Shanika Iroshi Nanayakkara. [v1] Wed, 3 Dec 2025 01:58:03 UTC (5,419 KB); [v2] Wed, 18 Feb 2026 03:07:58 UTC (5,419 KB)
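The abstract describes two coupled mechanisms: a QoS gain that reweights each client by its teleportation fidelity, latency, and instability, and a geometry gain that controls how far the global model moves toward the aggregated client update. The sketch below illustrates how such a dual-gain aggregation step could be wired up; the function names, the particular QoS formula, and the linear blending rule are all assumptions for illustration, since the abstract does not give the paper's exact definitions.

```python
import numpy as np

def qos_gain(fidelity, latency, instability, eps=1e-8):
    """Hypothetical QoS gain: rewards high teleportation fidelity and
    penalizes latency and instability. This proxy formula is an
    assumption, not the paper's definition."""
    return fidelity * np.exp(-latency) * (1.0 - instability) + eps

def a2g_step(global_model, client_models, fidelities, latencies,
             instabilities, geometry_gain=0.5):
    """One dual-gain aggregation step in the spirit of A2G (sketch).

    - A per-client QoS gain modulates client importance.
    - A geometry gain regulates the geometric blend between the
      current global model and the QoS-weighted client average.
    """
    q = np.array([qos_gain(f, l, s)
                  for f, l, s in zip(fidelities, latencies, instabilities)])
    q /= q.sum()  # normalize client importances to sum to 1
    weighted_avg = sum(w * m for w, m in zip(q, client_models))
    # Geometric blending: interpolate between the old global model
    # and the weighted client average by the geometry gain.
    return (1.0 - geometry_gain) * global_model + geometry_gain * weighted_avg

# Toy usage: three clients with differing link quality.
rng = np.random.default_rng(0)
global_model = np.zeros(4)
clients = [global_model + rng.normal(scale=0.1, size=4) for _ in range(3)]
new_global = a2g_step(global_model, clients,
                      fidelities=[0.95, 0.80, 0.99],
                      latencies=[0.1, 0.5, 0.2],
                      instabilities=[0.05, 0.30, 0.02])
```

In this toy version, an unreliable client (low fidelity, high latency or instability) contributes less to the average, while the geometry gain bounds how far any single round can pull the global model; how A2G actually adapts these two gains is detailed in the paper itself.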