[2509.15147] Who to Trust? Aggregating Client Predictions in Federated Distillation
Computer Science > Machine Learning
arXiv:2509.15147 (cs)
[Submitted on 18 Sep 2025 (v1), last revised 25 Mar 2026 (this version, v2)]

Title: Who to Trust? Aggregating Client Predictions in Federated Distillation
Authors: Viktor Kovalchuk, Denis Son, Arman Bolatov, Mohsen Guizani, Samuel Horváth, Maxim Panov, Martin Takáč, Eduard Gorbunov, Nikita Kotelevskii

Abstract: Under data heterogeneity (e.g., $\textit{class mismatch}$), clients may produce unreliable predictions for instances belonging to unfamiliar classes. An equally weighted combination of such predictions can corrupt the teacher signal used for distillation. In this paper, we provide a theoretical analysis of Federated Distillation and show that aggregating client predictions on a shared public dataset converges to a neighborhood of the optimum, where the neighborhood size is governed by the aggregation quality. We further propose two uncertainty-aware aggregation methods, $\mathbf{UWA}$ and $\mathbf{sUWA}$, which leverage density-based uncertainty estimates to down-weight unreliable client predictions. Experiments on image and text classification benchmarks demonstrate that our methods are particularly effective under high data heterogeneity, while matching standard averaging when heterogeneity is low.

Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2509.15147 [cs.LG]
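The abstract does not spell out the exact UWA/sUWA update rules, but the core idea of down-weighting unreliable client predictions per public instance can be sketched as below. All names are illustrative, and the softmax-over-negative-uncertainty weighting is an assumption standing in for the paper's density-based scheme, not the authors' actual method.

```python
import numpy as np

def uncertainty_weighted_average(client_probs, client_uncertainty):
    """Hypothetical sketch of uncertainty-aware aggregation.

    client_probs: (K, N, C) array of per-client class probabilities
        for K clients on N shared public instances with C classes.
    client_uncertainty: (K, N) array of per-client, per-instance
        uncertainty scores (e.g., from a density estimate);
        higher means less reliable.
    Returns: (N, C) aggregated teacher probabilities.
    """
    # Turn uncertainties into per-instance reliability weights via a
    # softmax over negative uncertainty (one simple, assumed choice;
    # the paper's UWA/sUWA rules may differ).
    logits = -client_uncertainty                     # (K, N)
    logits -= logits.max(axis=0, keepdims=True)      # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=0, keepdims=True)    # columns sum to 1

    # Weighted combination of client predictions per public instance.
    # With uniform uncertainties this reduces to standard averaging,
    # consistent with the abstract's low-heterogeneity behavior.
    return np.einsum('kn,knc->nc', weights, client_probs)
```

Note that a client judged uncertain on a given public instance (e.g., one whose classes it never saw during local training) contributes almost nothing to the teacher signal for that instance, which is the failure mode of equal weighting that the abstract highlights.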