[2603.04436] ZorBA: Zeroth-order Federated Fine-tuning of LLMs with Heterogeneous Block Activation


arXiv - Machine Learning 4 min read


Computer Science > Machine Learning
arXiv:2603.04436 (cs) · [Submitted on 19 Feb 2026]

Title: ZorBA: Zeroth-order Federated Fine-tuning of LLMs with Heterogeneous Block Activation
Authors: Chuiyang Meng, Ming Tang, Vincent W.S. Wong

Abstract: Federated fine-tuning of large language models (LLMs) enables collaborative tuning across distributed clients. However, because of the large size of LLMs, local updates in federated learning (FL) may incur substantial video random-access memory (VRAM) usage, and frequent model exchange may lead to significant communication overhead. To tackle these challenges, in this paper we propose ZorBA, a zeroth-order optimization-based federated fine-tuning framework with heterogeneous block activation. ZorBA leverages zeroth-order optimization, which estimates gradients using only forward passes, so clients never need to store gradients. ZorBA includes a heterogeneous block activation mechanism in which the central server allocates different subsets of transformer blocks to clients in order to accelerate the convergence rate and reduce VRAM usage. Furthermore, ZorBA utilizes shared random seeds and finite differences of gradients to reduce the communication overhead. We conduct theoretical analysis to characterize the effect of block activation decisions on the convergence ...
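The communication trick the abstract mentions, shared random seeds plus finite differences, can be illustrated with a minimal zeroth-order SGD sketch. This is not ZorBA itself (the paper's block-activation mechanism and exact update rule are not reproduced here); it is a generic two-point finite-difference estimator on a toy quadratic, where the function names, step sizes, and loss are all illustrative assumptions. The key point it shows: because the perturbation direction is regenerated from a seed, a client only has to send a seed and one scalar per step instead of a full gradient.

```python
# Sketch of seed-based zeroth-order (ZO) gradient estimation with
# two-point finite differences. Illustrative only; names, loss, and
# hyperparameters are assumptions, not taken from the ZorBA paper.
import numpy as np

def zo_step(theta, loss_fn, seed, eps=1e-3, lr=1e-2):
    """One ZO-SGD step: estimate the directional derivative of loss_fn
    along a random direction z regenerated from `seed`, then update."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(theta.shape)          # shared random direction
    loss_plus = loss_fn(theta + eps * z)          # forward pass 1
    loss_minus = loss_fn(theta - eps * z)         # forward pass 2
    proj_grad = (loss_plus - loss_minus) / (2 * eps)  # scalar finite difference
    # Communication payload: only (seed, proj_grad). A server holding the
    # same RNG can regenerate z from the seed and replay the update.
    return theta - lr * proj_grad * z, (seed, proj_grad)

# Toy quadratic objective with minimum at theta = [1, 1, 1, 1].
target = np.ones(4)
loss = lambda t: float(np.sum((t - target) ** 2))

theta = np.zeros(4)
for step in range(2000):
    theta, payload = zo_step(theta, loss, seed=step)
print(np.round(theta, 2))
```

Each step costs two forward passes and no backward pass, which is why ZO methods avoid storing gradients or activations for backpropagation; the trade-off is higher gradient-estimate variance, which is part of what block-activation schemes aim to mitigate.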

Originally published on March 06, 2026. Curated by AI News.

Related Articles

LLMs

[D] How's MLX and jax/ pytorch on MacBooks these days?

So I'm looking at buying a new 14 inch MacBook pro with m5 pro and 64 gb of memory vs m4 max with same specs. My priorities are pro sof...

Reddit - Machine Learning · 1 min ·
LLMs

[R] 94.42% on BANKING77 Official Test Split with Lightweight Embedding + Example Reranking (strict full-train protocol)

BANKING77 (77 fine-grained banking intents) is a well-established but increasingly saturated intent classification benchmark. did this wh...

Reddit - Machine Learning · 1 min ·
LLMs

The “Agony” of ChatGPT: Would You Let AI Write Your Wedding Speech?

As more Americans use AI chatbots like ChatGPT to compose their wedding vows, one expert asks: “Is the speech sacred to you?”

AI Tools & Products · 12 min ·
LLMs

I tested Gemini on Android Auto and now I can't stop talking to it: 5 tasks it nails

I didn't see much benefit for Google's AI - until now. Here are my favorite ways to use the new Gemini integration in my car.

AI Tools & Products · 7 min ·

