[2602.14244] Federated Ensemble Learning with Progressive Model Personalization
Summary
This paper presents a novel framework for Federated Ensemble Learning that enhances model personalization while addressing statistical heterogeneity across clients.
Why It Matters
The proposed method improves personalization in federated learning, which is crucial for applications requiring privacy and tailored models. By effectively managing the trade-off between shared and client-specific components, this research could lead to better performance in real-world scenarios where data distribution varies significantly.
Key Takeaways
- Introduces a boosting-inspired framework for personalized federated learning.
- Balances complexity between shared and client-specific model components.
- Demonstrates improved performance on benchmark datasets under heterogeneous conditions.
- Provides theoretical guarantees for model generalization.
- Addresses overfitting issues by progressively increasing model expressiveness.
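The complexity control mentioned above can be illustrated with a toy low-rank layer. This is a hedged sketch, not the paper's implementation: the class name `LowRankLinear` and all dimensions are hypothetical, and it only shows the general idea that factorizing a weight matrix as `U @ V` with small rank caps a personalized head's parameter count and effective capacity.

```python
import numpy as np

class LowRankLinear:
    """Toy linear layer whose weight is factorized as W ~= U @ V.

    Hypothetical illustration only: shrinking `rank` caps the layer's
    effective capacity, which is one way low-rank factorization can
    control the complexity of a client-specific head.
    """
    def __init__(self, d_in, d_out, rank, seed=0):
        rng = np.random.default_rng(seed)
        # U: (d_in, rank), V: (rank, d_out) -- far fewer parameters
        # than a dense (d_in, d_out) matrix when rank is small.
        self.U = rng.standard_normal((d_in, rank)) / np.sqrt(d_in)
        self.V = rng.standard_normal((rank, d_out)) / np.sqrt(rank)

    def __call__(self, x):
        return x @ self.U @ self.V

    def num_params(self):
        return self.U.size + self.V.size

dense_params = 128 * 128                   # 16384 params for a dense layer
layer = LowRankLinear(128, 128, rank=8)    # 2 * 128 * 8 = 2048 params
features = np.ones((4, 128))
out = layer(features)                      # shape (4, 128)
```

Raising the rank across boosting iterations would smoothly trade parameter count for expressiveness, which is the dial the paper says rigid shallow heads lack.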
Abstract
arXiv:2602.14244 (stat) · Submitted 15 Feb 2026
Authors: Ala Emrani, Amir Najafi, Abolfazl Motahari
Federated Learning provides a privacy-preserving paradigm for distributed learning, but suffers from statistical heterogeneity across clients. Personalized Federated Learning (PFL) mitigates this issue by considering client-specific models. A widely adopted approach in PFL decomposes neural networks into a shared feature extractor and client-specific heads. While effective, this design induces a fundamental tradeoff: deep or expressive shared components hinder personalization, whereas large local heads exacerbate overfitting under limited per-client data. Most existing methods rely on rigid, shallow heads, and therefore fail to navigate this tradeoff in a principled manner. In this work, we propose a boosting-inspired framework that enables smooth control of this tradeoff. Instead of training a single personalized model, we construct an ensemble of $T$ models for each client. Across boosting iterations, the depth of the personalized component is progressively increased, while its effective complexity is systematically controlled via low-rank factorization or width shrinkage. This design simultaneously...
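The per-client structure described in the abstract, a shared feature extractor combined with an ensemble of $T$ progressively deeper personal heads, can be sketched as follows. This is a minimal stand-in, not the authors' method: the functions `shared_extractor`, `make_head`, and `client_predict` are invented for illustration, the heads here simply grow by one layer per iteration, and the ensemble is a plain weighted sum rather than a fitted boosting procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_extractor(x):
    # Stand-in for the globally shared feature extractor (frozen here).
    return np.tanh(x)

def make_head(depth, d, seed):
    # Hypothetical personal head: `depth` small tanh layers, then a
    # scalar readout. Depth grows across boosting iterations.
    r = np.random.default_rng(seed)
    weights = [r.standard_normal((d, d)) * 0.1 for _ in range(depth)]
    def head(z):
        for W in weights:
            z = np.tanh(z @ W)
        return z.sum(axis=1, keepdims=True)  # one output per sample
    return head

T, d = 3, 4
# Head t has depth t + 1: expressiveness increases with the iteration,
# mirroring the progressive-depth idea in the abstract.
heads = [make_head(depth=t + 1, d=d, seed=t) for t in range(T)]

def client_predict(x, weights=None):
    # Ensemble prediction for one client: shared features fed through
    # all T personal heads, combined by a weighted sum.
    z = shared_extractor(x)
    weights = weights if weights is not None else [1.0 / T] * T
    return sum(w * h(z) for w, h in zip(weights, heads))

x = rng.standard_normal((5, d))
y = client_predict(x)  # shape (5, 1)
```

In the actual framework each head would be trained in sequence on the client's data, with its capacity constrained via low-rank factorization or width shrinkage as the abstract describes; here the heads are random to keep the sketch self-contained.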