[2511.16992] FIRM: Federated In-client Regularized Multi-objective Alignment for Large Language Models
Computer Science > Machine Learning

arXiv:2511.16992 (cs)

[Submitted on 21 Nov 2025 (v1), last revised 26 Mar 2026 (this version, v2)]

Title: FIRM: Federated In-client Regularized Multi-objective Alignment for Large Language Models

Authors: Fatemeh Nourzad, Amirhossein Roknilamouki, Eylem Ekici, Jia Liu, Ness Shroff

Abstract: Aligning Large Language Models (LLMs) with human values often involves balancing multiple, conflicting objectives such as helpfulness and harmlessness. Training these models is computationally intensive, and centralizing the process raises significant data privacy concerns. Federated Learning (FL) offers a compelling alternative, but existing Federated Multi-Objective Optimization (FMOO) methods face severe communication bottlenecks as their reliance on transmitting multiple gradients to a server is unscalable for large models. We introduce FIRM (Federated In-client Regularized Multi-objective alignment), a novel algorithm that achieves both client disagreement drift mitigation and communication efficiency. In FIRM, each client locally solves a regularized multi-objective optimization problem. By directly mitigating client disagreement drift through in-client regularization, our method eliminates the need for the multi-gradient transmissions common in prior works. Con...
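The abstract's core idea — each client scalarizes its multiple objectives locally and adds an in-client regularizer so only a single model update is communicated — can be illustrated with a toy sketch. This is an assumption-laden illustration, not the paper's FIRM algorithm: the objectives are toy quadratics standing in for helpfulness/harmlessness losses, the scalarization uses fixed preference weights, and the in-client regularizer is a FedProx-style proximal term `mu * ||w - w_global||^2`; the function names (`client_update`, `server_round`) are hypothetical.

```python
import numpy as np

def grad_objectives(w, targets):
    # Gradients of two conflicting quadratic objectives 0.5*||w - t||^2.
    return [w - t for t in targets]

def client_update(w_global, targets, prefs, mu=0.1, lr=0.1, steps=50):
    """Locally solve a regularized multi-objective problem; return ONE model."""
    w = w_global.copy()
    for _ in range(steps):
        grads = grad_objectives(w, targets)
        # Preference-weighted scalarization of the per-objective gradients.
        g = sum(p * gi for p, gi in zip(prefs, grads))
        # In-client regularizer pulling toward the global model (drift control).
        g += mu * (w - w_global)
        w -= lr * g
    # Only this single model is sent to the server, not one gradient per objective.
    return w

def server_round(w_global, clients):
    # Plain averaging of the single per-client model updates.
    updates = [client_update(w_global, t, p) for t, p in clients]
    return np.mean(updates, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=3)
    # Two clients, same conflicting targets but opposite preference weights.
    clients = [
        ([np.zeros(3), np.ones(3)], [0.7, 0.3]),
        ([np.zeros(3), np.ones(3)], [0.3, 0.7]),
    ]
    for _ in range(10):
        w = server_round(w, clients)
    print(np.round(w, 3))  # converges near the preference-averaged optimum
```

With symmetric preferences the rounds contract toward the midpoint of the two clients' local optima, showing how per-round communication stays at one model vector per client regardless of the number of objectives.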