[2601.10940] HOSL: Hybrid-Order Split Learning for Memory-Constrained Edge Training


arXiv - Machine Learning


Computer Science > Machine Learning

arXiv:2601.10940 (cs) [Submitted on 16 Jan 2026 (v1), last revised 4 Apr 2026 (this version, v4)]

Title: HOSL: Hybrid-Order Split Learning for Memory-Constrained Edge Training

Authors: Aakriti Lnu, Zhe Li, Dandan Liang, Chao Huang, Rui Li, Haibo Yang

Abstract: Split learning (SL) enables collaborative training of large language models (LLMs) between resource-constrained edge devices and compute-rich servers by partitioning model computation across the network boundary. However, existing SL systems predominantly rely on first-order (FO) optimization, which requires clients to store intermediate quantities such as activations for backpropagation. This results in substantial memory overhead, largely negating the benefits of model partitioning. In contrast, zeroth-order (ZO) optimization eliminates backpropagation and significantly reduces memory usage, but often suffers from slow convergence and degraded performance. In this work, we propose HOSL, a novel Hybrid-Order Split Learning framework that addresses this fundamental trade-off between memory efficiency and optimization effectiveness by strategically integrating ZO optimization on the client side with FO optimization on the server side. By employing memory-efficient ZO gradient estimation at the client, HOSL eliminates backprop...
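The client-side memory savings come from zeroth-order gradient estimation: the gradient is approximated from forward passes alone, so no activations need to be cached for backpropagation. The paper does not publish its estimator here, but a minimal sketch of the standard two-point (SPSA-style) ZO estimate on a toy loss illustrates the idea; `zo_gradient`, `loss_fn`, and the smoothing parameter `mu` are illustrative names, not HOSL's API:

```python
import numpy as np

def zo_gradient(loss_fn, params, mu=1e-3, seed=0):
    """Two-point zeroth-order gradient estimate (SPSA-style).

    Uses only two forward evaluations of loss_fn, so intermediate
    activations never need to be stored for a backward pass.
    """
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(params.shape)      # random perturbation direction
    # Finite difference of the loss along u, symmetric around params
    delta = loss_fn(params + mu * u) - loss_fn(params - mu * u)
    # Directional derivative estimate, scaled back onto the direction u;
    # unbiased for the true gradient in expectation over u
    return (delta / (2.0 * mu)) * u

# Toy quadratic loss: the true gradient at x is 2*x.
loss = lambda x: float(np.sum(x ** 2))
x = np.ones(4)
g = zo_gradient(loss, x)
```

A single sample gives a noisy rank-one estimate (here `2 * (x @ u) * u` rather than `2 * x`); in practice several perturbations are averaged, trading extra forward passes for lower variance, which is one source of the slow convergence the abstract attributes to pure ZO training.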

Originally published on April 07, 2026. Curated by AI News.

Related Articles

Llms

Associative memory system for LLMs that learns during inference [P]

I've been working on MDA (Modular Dynamic Architecture), an online associative memory system for LLMs. Here's what I learned building it....

Reddit - Machine Learning · 1 min ·
Llms

Things I got wrong building a confidence evaluator for local LLMs [D]

I've been building **Autodidact**, a local-first AI agent framework. The central piece is a **confidence evaluator** - something that dec...

Reddit - Machine Learning · 1 min ·
Llms

I’m convinced 90% of you building "AI Agents" are just burning money on proxy providers. [D]

Seriously, I just audited my stack and realized I’m spending more on rotating residential proxies than I am on the actual Claude and Open...

Reddit - Machine Learning · 1 min ·
Llms

How do you test AI agents in production? The unpredictability is overwhelming. [D]

I’ve been in QA for almost a decade. My mental model for quality was always: given input X, assert output Y. Now I’m on a team that’s shi...

Reddit - Machine Learning · 1 min ·

