[2512.03363] Adaptive Aggregation with Two Gains in QFL

arXiv · Machine Learning · 3 min read

Summary

The paper presents A2G, a framework for adaptive aggregation in quantum federated learning (QFL) that addresses performance degradation caused by uneven client quality and network instability.

Why It Matters

As federated learning extends to quantum-enabled networks, traditional aggregation methods become inadequate. This research introduces a dual-gain approach that improves model performance by accounting for both client quality and the geometric mismatch between local and global models, making it relevant to future developments at the intersection of machine learning and quantum computing.

Key Takeaways

  • A2G framework improves aggregation in quantum federated learning.
  • Addresses issues of uneven client quality and network instability.
  • Incorporates geometric blending and client importance modulation.
  • Enhances performance in heterogeneous classical networks.
  • Provides a foundation for future research in quantum-enabled systems.

Computer Science > Machine Learning

arXiv:2512.03363 (cs) [Submitted on 3 Dec 2025 (v1), last revised 18 Feb 2026 (this version, v2)]

Title: Adaptive Aggregation with Two Gains in QFL
Authors: Shanika Iroshi Nanayakkara

Abstract: Federated learning (FL) deployed over quantum-enabled and heterogeneous classical networks faces significant performance degradation due to uneven client quality, stochastic teleportation fidelity, device instability, and geometric mismatch between local and global models. Classical aggregation rules assume Euclidean topology and uniform communication reliability, limiting their suitability for emerging quantum federated systems. This paper introduces A2G (Adaptive Aggregation with Two Gains), a dual-gain framework that jointly regulates geometric blending through a geometry gain and modulates client importance using a QoS gain derived from teleportation fidelity, latency, and instability.

Subjects: Machine Learning (cs.LG); Quantum Physics (quant-ph)
Cite as: arXiv:2512.03363 [cs.LG] (or arXiv:2512.03363v2 [cs.LG] for this version), https://doi.org/10.48550/arXiv.2512.03363

Submission history:
[v1] Wed, 3 Dec 2025 01:58:03 UTC (5,419 KB)
[v2] Wed, 18 Feb 2026 03:07:58 UTC (5,419 KB)
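To make the dual-gain idea concrete, here is a minimal sketch of what such an aggregation rule could look like. The paper does not publish this code; the exponential form of the QoS gain, its coefficients, and the linear geometric blend are all illustrative assumptions chosen to match the abstract's description (client weights modulated by fidelity, latency, and instability; a geometry gain blending local and global models):

```python
import numpy as np

def qos_gain(fidelity, latency, instability, alpha=1.0, beta=1.0, gamma=1.0):
    """Hypothetical QoS gain: rewards teleportation fidelity, penalizes
    latency and device instability. The exponential form and the
    alpha/beta/gamma coefficients are assumptions for illustration."""
    return np.exp(alpha * fidelity - beta * latency - gamma * instability)

def a2g_aggregate(global_model, client_models, fidelities, latencies,
                  instabilities, geometry_gain=0.5):
    """Illustrative dual-gain aggregation: QoS-weighted client average,
    then a geometric blend with the current global model."""
    # QoS gain: per-client importance, normalized to sum to 1
    weights = np.array([qos_gain(f, l, s)
                        for f, l, s in zip(fidelities, latencies, instabilities)])
    weights /= weights.sum()
    client_avg = sum(w * m for w, m in zip(weights, client_models))
    # Geometry gain: how far to move the global model toward the client average
    return (1.0 - geometry_gain) * global_model + geometry_gain * client_avg
```

Under this sketch, a client with low teleportation fidelity or high instability contributes less to the update, and the geometry gain caps how aggressively any single round can pull the global model.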

Related Articles

Machine Learning

Meta Unveils New A.I. Model, Its First From the Superintelligence Lab

Meta has introduced a new A.I. model, marking the first release from its Superintelligence Lab.

AI Tools & Products · 1 min

LLMs

Anthropic’s ‘Claude Mythos’ model sparks fear of AI doomsday if released to public: ‘Weapons we can’t even envision’

Anthropic has triggered alarm bells by touting the terrifying capabilities of “Claude Mythos” – with executives warning the new AI model ...

AI Tools & Products · 6 min

Machine Learning

Meta’s New AI Model Gives Mark Zuckerberg a Seat at the Big Kid’s Table

Muse Spark is Meta’s first model since its AI reboot, and the benchmarks suggest formidable performance.

Wired - AI · 6 min

Machine Learning

Meta debuts new AI model, attempting to catch Google, OpenAI after spending billions

Meta debuted its first major large language model, Muse Spark, spearheaded by chief AI officer Alexandr Wang, who leads Meta Superintelli...

AI Tools & Products · 6 min
