[2403.15605] Efficiently Assemble Normalization Layers and Regularization for Federated Domain Generalization

arXiv - Machine Learning 4 min read Article

Summary

The paper presents gPerXAN, a novel method for Federated Domain Generalization (FedDG) that improves generalization under domain shift by assembling normalization layers with a guiding regularizer.

Why It Matters

This research is significant as it tackles the critical challenge of domain shift in machine learning, particularly in federated learning environments where privacy and communication costs are paramount. The proposed method shows promise in improving model generalization across unseen domains, which is essential for real-world applications.

Key Takeaways

  • gPerXAN introduces a novel normalization scheme for FedDG.
  • The method selectively filters domain-specific features while retaining discriminative power.
  • Incorporates a regularizer to capture domain-invariant representations.
  • Demonstrated superior performance on benchmark datasets PACS and Office-Home.
  • Addresses privacy concerns and communication costs in federated learning.
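The takeaways above center on a normalization scheme that filters domain-specific statistics while keeping discriminative features. As a rough intuition for what "assembling normalization layers" can mean, the sketch below blends instance normalization (per-sample statistics, which tend to carry domain style) with batch normalization (shared statistics) via a gate. The blending rule, the scalar `gate`, and the 1-D feature shape are illustrative assumptions, not the paper's exact Personalized eXplicitly Assembled Normalization.

```python
import math

def mean_var(xs):
    """Mean and (population) variance of a list of floats."""
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, v

def assembled_norm(batch, gate, eps=1e-5):
    """Blend instance-norm and batch-norm outputs for 1-D features.

    batch : list of samples, each a list of floats
    gate  : float in [0, 1]; higher values rely more on per-sample
            (instance) statistics, filtering shared batch statistics.
    Assumption: this gating form is a simplified stand-in for the
    paper's assembled normalization, for illustration only.
    """
    # Batch statistics are shared across every sample in the client batch.
    flat = [x for sample in batch for x in sample]
    bm, bv = mean_var(flat)
    out = []
    for sample in batch:
        im, iv = mean_var(sample)  # per-sample (instance) statistics
        normed = []
        for x in sample:
            inst = (x - im) / math.sqrt(iv + eps)
            bat = (x - bm) / math.sqrt(bv + eps)
            normed.append(gate * inst + (1 - gate) * bat)
        out.append(normed)
    return out
```

With `gate=1.0` each sample is normalized by its own statistics (every output sample has mean zero), which is the behavior that strips sample-level, domain-biased statistics.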

Computer Science > Computer Vision and Pattern Recognition

arXiv:2403.15605 (cs) — Submitted on 22 Mar 2024 (v1), last revised 16 Feb 2026 (this version, v2)

Title: Efficiently Assemble Normalization Layers and Regularization for Federated Domain Generalization

Authors: Khiem Le, Long Ho, Cuong Do, Danh Le-Phuoc, Kok-Seng Wong

Abstract: Domain shift is a formidable issue in Machine Learning that causes a model to suffer from performance degradation when tested on unseen domains. Federated Domain Generalization (FedDG) attempts to train a global model using collaborative clients in a privacy-preserving manner that can generalize well to unseen clients possibly with domain shift. However, most existing FedDG methods either cause additional privacy risks of data leakage or induce significant costs in client communication and computation, which are major concerns in the Federated Learning paradigm. To circumvent these challenges, here we introduce a novel architectural method for FedDG, namely gPerXAN, which relies on a normalization scheme working with a guiding regularizer. In particular, we carefully design Personalized eXplicitly Assembled Normalization to enforce client models selectively filtering domain-specific features that are biased towards local data while retaining discrimination ...
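The abstract pairs the normalization scheme with a guiding regularizer that steers client models toward domain-invariant representations. The sketch below shows one simple way such a term can enter a client's training objective: a task loss plus a penalty that pulls per-client feature statistics toward shared global statistics. The penalty form, the `weight` coefficient, and the statistics used are assumptions for illustration, not gPerXAN's actual regularizer.

```python
def client_loss(task_loss, features, global_stats, weight=0.1):
    """Illustrative FedDG client objective: task loss plus a guiding
    regularizer encouraging domain-invariant representations.

    features     : list of floats, a client's feature activations
    global_stats : (mean, variance) shared across clients
    weight       : regularization strength (hypothetical value)
    """
    m = sum(features) / len(features)
    v = sum((f - m) ** 2 for f in features) / len(features)
    gm, gv = global_stats
    # Penalize divergence of local feature statistics from global ones;
    # a squared-distance penalty is a common, simple choice.
    reg = (m - gm) ** 2 + (v - gv) ** 2
    return task_loss + weight * reg
```

When the client's feature statistics already match the global ones, the penalty vanishes and only the task loss remains; otherwise the regularizer discourages drift toward domain-biased local statistics.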
