[2603.04436] ZorBA: Zeroth-order Federated Fine-tuning of LLMs with Heterogeneous Block Activation
Computer Science > Machine Learning
arXiv:2603.04436 (cs)
[Submitted on 19 Feb 2026]
Title: ZorBA: Zeroth-order Federated Fine-tuning of LLMs with Heterogeneous Block Activation
Authors: Chuiyang Meng, Ming Tang, Vincent W.S. Wong
Abstract: Federated fine-tuning of large language models (LLMs) enables collaborative tuning across distributed clients. However, due to the large size of LLMs, local updates in federated learning (FL) may incur substantial video random-access memory (VRAM) usage. Moreover, frequent model exchange may lead to significant communication overhead. To tackle these challenges, in this paper we propose ZorBA, a zeroth-order optimization-based federated fine-tuning framework with heterogeneous block activation. ZorBA leverages zeroth-order optimization, which estimates gradients using only forward passes, so clients need not store gradients. ZorBA includes a heterogeneous block activation mechanism in which the central server allocates different subsets of transformer blocks to clients, accelerating convergence and reducing VRAM usage. Furthermore, ZorBA uses shared random seeds and finite-difference gradient estimates to reduce the communication overhead. We conduct theoretical analysis to characterize the effect of block activation decisions on the convergence ...
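To make the abstract's two core ideas concrete, here is a minimal sketch of a zeroth-order (SPSA-style) client update with a shared random seed. This is an illustrative toy, not the paper's implementation: the function name `zo_sgd_step`, the toy quadratic loss, and all hyperparameters are assumptions. The point is that the gradient is estimated from two forward passes (no backward pass, no stored gradients), and that a client only needs to transmit the seed and one scalar, since the server can regenerate the perturbation from the same seed.

```python
import numpy as np

def zo_sgd_step(params, loss_fn, seed, eps=1e-3, lr=1e-4):
    """One zeroth-order update: estimate the directional derivative along a
    seeded random perturbation using two forward passes, then step.
    Illustrative sketch only; hyperparameters are placeholders."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(params.shape)  # perturbation regenerable from the shared seed
    # Finite difference of the loss along z: two forward passes, no backprop.
    g_scalar = (loss_fn(params + eps * z) - loss_fn(params - eps * z)) / (2 * eps)
    # Communication payload is just (seed, g_scalar); the receiver rebuilds z
    # from the seed and applies the identical update.
    new_params = params - lr * g_scalar * z
    return new_params, (seed, g_scalar)

# Toy usage: one step on a quadratic loss.
w = np.ones(4)
loss = lambda p: float(np.sum(p ** 2))
w_new, payload = zo_sgd_step(w, loss, seed=0)
print(loss(w_new) < loss(w))  # the step reduces the toy loss
```

In a federated setting this is what makes the seed trick pay off: instead of exchanging full model deltas, each client can send a handful of (seed, scalar) pairs per round, and the server replays the perturbations locally.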