[2403.15605] Efficiently Assemble Normalization Layers and Regularization for Federated Domain Generalization
Summary
The paper introduces gPerXAN, a novel method for Federated Domain Generalization (FedDG) that mitigates domain shift by assembling personalized normalization layers with a guiding regularizer, improving generalization to unseen domains.
Why It Matters
This research tackles the critical challenge of domain shift in machine learning, particularly in federated learning environments where privacy preservation and communication efficiency are paramount. The proposed method improves model generalization to unseen domains, which is essential for real-world deployment.
Key Takeaways
- gPerXAN introduces a novel normalization scheme for FedDG.
- The scheme selectively filters domain-specific features biased toward local data while retaining discriminative power.
- A guiding regularizer encourages the model to capture domain-invariant representations.
- The method demonstrates superior performance on the PACS and Office-Home benchmark datasets.
- The design avoids the additional privacy risks and the client communication and computation costs incurred by prior FedDG methods.
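The paper does not spell out the assembly mechanism here, but the idea of blending a domain-filtering normalizer with a discriminative one can be illustrated with a minimal numpy sketch. Instance normalization strips per-sample style statistics (which often carry domain information), while batch normalization preserves batch-level discriminative statistics; a per-channel gate mixes the two. The gate, function names, and blending rule below are illustrative assumptions, not gPerXAN's actual design.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Normalize each channel over (N, H, W): keeps batch-level,
    # discriminative statistics shared across samples.
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    # Normalize each sample and channel over (H, W): strips per-sample
    # style statistics, which often encode domain-specific features.
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def assembled_norm(x, gate):
    # Hypothetical "assembled" normalization: a per-channel gate in [0, 1]
    # blends instance-normalized (domain-filtering) and batch-normalized
    # (discriminative) activations. In a trained model the gate would be
    # a learnable parameter; gPerXAN's actual assembly may differ.
    g = gate.reshape(1, -1, 1, 1)
    return g * instance_norm(x) + (1.0 - g) * batch_norm(x)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4, 16, 16))  # (N, C, H, W) feature map
gate = np.full(4, 0.5)               # fixed here; learnable in practice
y = assembled_norm(x, gate)
print(y.shape)                       # (8, 4, 16, 16)
```

With the gate at 0 the layer reduces to plain batch normalization, and at 1 to plain instance normalization, so the per-channel gate interpolates between keeping and filtering domain-specific statistics.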
Paper Information
arXiv:2403.15605 (cs) — Computer Science > Computer Vision and Pattern Recognition
Submitted on 22 Mar 2024 (v1); last revised 16 Feb 2026 (this version, v2)
Authors: Khiem Le, Long Ho, Cuong Do, Danh Le-Phuoc, Kok-Seng Wong
Abstract: Domain shift is a formidable issue in Machine Learning that causes a model to suffer from performance degradation when tested on unseen domains. Federated Domain Generalization (FedDG) attempts to train a global model using collaborative clients in a privacy-preserving manner that can generalize well to unseen clients possibly with domain shift. However, most existing FedDG methods either cause additional privacy risks of data leakage or induce significant costs in client communication and computation, which are major concerns in the Federated Learning paradigm. To circumvent these challenges, here we introduce a novel architectural method for FedDG, namely gPerXAN, which relies on a normalization scheme working with a guiding regularizer. In particular, we carefully design Personalized eXplicitly Assembled Normalization to enforce client models selectively filtering domain-specific features that are biased towards local data while retaining discrimination ...