[2604.00050] Task-Centric Personalized Federated Fine-Tuning of Language Models
Computer Science > Machine Learning
arXiv:2604.00050 (cs)
[Submitted on 30 Mar 2026]

Title: Task-Centric Personalized Federated Fine-Tuning of Language Models
Authors: Gabriel U. Talasso, Meghdad Kurmanji, Allan M. de Souza, Nicholas D. Lane, Leandro A. Villas

Abstract: Federated Learning (FL) has emerged as a promising technique for training language models on distributed and private datasets spanning diverse tasks. However, aggregating models trained on heterogeneous tasks often degrades the performance of individual clients. To address this issue, Personalized FL (pFL) aims to create models tailored to each client's data distribution. Although these approaches improve local performance, they usually lack robustness in two respects: (i) generalization, when clients must make predictions on unseen tasks or face changes in their data distributions, and (ii) intra-client task interference, when a single client's data contains multiple distributions that may interfere with each other during local training. To tackle these two challenges, we propose FedRouter, a clustering-based pFL approach that builds specialized models for each task rather than for each client. FedRouter uses adapters to personalize models by employing two clustering mechanisms to a...
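The task-level clustering idea described in the abstract, namely grouping clients' adapter updates so that clients working on similar tasks share one specialized adapter, can be sketched as below. This is a minimal illustration, not the paper's method: the k-means routine, the `adapter_updates` vectors, and the choice of two clusters are all hypothetical, and FedRouter's actual clustering mechanisms are not specified in the truncated abstract.

```python
import random
from math import dist

def kmeans(points, k, iters=20, seed=0):
    """Toy k-means: cluster adapter-update vectors into k task groups."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each client's update vector to the nearest centroid.
        groups = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: dist(p, centroids[i]))
            groups[idx].append(p)
        # Recompute each centroid as the mean of its assigned updates;
        # the centroid then plays the role of the aggregated adapter
        # for that task cluster.
        for i, g in enumerate(groups):
            if g:
                centroids[i] = tuple(sum(c) / len(g) for c in zip(*g))
    return centroids, groups

# Hypothetical adapter-update vectors from 6 clients covering 2 latent tasks.
adapter_updates = [(0.9, 0.1), (1.0, 0.0), (0.8, 0.2),   # task A
                   (0.1, 0.9), (0.0, 1.0), (0.2, 0.8)]   # task B
centroids, groups = kmeans(adapter_updates, k=2)
print(len(centroids), sum(len(g) for g in groups))
```

In a full system, each cluster's centroid would serve as the shared adapter for its task, and a routing step would decide which cluster a new client or sample belongs to.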