[2601.10940] HOSL: Hybrid-Order Split Learning for Memory-Constrained Edge Training
Computer Science > Machine Learning

arXiv:2601.10940 (cs)

[Submitted on 16 Jan 2026 (v1), last revised 4 Apr 2026 (this version, v4)]

Title: HOSL: Hybrid-Order Split Learning for Memory-Constrained Edge Training

Authors: Aakriti Lnu, Zhe Li, Dandan Liang, Chao Huang, Rui Li, Haibo Yang

Abstract: Split learning (SL) enables collaborative training of large language models (LLMs) between resource-constrained edge devices and compute-rich servers by partitioning model computation across the network boundary. However, existing SL systems rely predominantly on first-order (FO) optimization, which requires clients to store intermediate quantities such as activations for backpropagation. This incurs substantial memory overhead, largely negating the benefits of model partitioning. In contrast, zeroth-order (ZO) optimization eliminates backpropagation and sharply reduces memory usage, but often suffers from slow convergence and degraded performance. In this work, we propose HOSL, a novel Hybrid-Order Split Learning framework that resolves this fundamental trade-off between memory efficiency and optimization effectiveness by strategically integrating ZO optimization on the client side with FO optimization on the server side. By employing memory-efficient ZO gradient estimation at the client, HOSL eliminates backprop...
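To make the abstract's key mechanism concrete, the following is a minimal, hedged sketch of the kind of two-point zeroth-order (ZO) gradient estimator the client side could use: only loss *values* are queried, so no backpropagation (and hence no stored activations) is required. The quadratic loss, the sample count, and the estimator details below are illustrative assumptions, not the paper's actual client procedure.

```python
import numpy as np

def zo_gradient(loss, w, mu=1e-4, n_samples=2000, seed=0):
    """Estimate grad loss(w) via symmetric finite differences along
    random Gaussian directions (an SPSA-style two-point estimator).
    Memory cost is O(dim(w)): no activations are stored."""
    rng = np.random.default_rng(seed)
    g = np.zeros_like(w)
    for _ in range(n_samples):
        u = rng.standard_normal(w.shape)          # random probe direction
        delta = loss(w + mu * u) - loss(w - mu * u)
        g += (delta / (2.0 * mu)) * u             # directional-derivative estimate
    return g / n_samples

# Toy loss with a known gradient 2*(w - 1), used only to sanity-check the estimator.
loss = lambda w: float(np.sum((w - 1.0) ** 2))
w = np.array([0.0, 2.0, -1.0])
est = zo_gradient(loss, w)
true = 2.0 * (w - 1.0)
print("ZO estimate:", np.round(est, 2))
print("True grad:  ", true)
```

Averaging over many probes drives the estimate toward the true gradient, at the cost of extra forward passes; this forward-pass-for-memory trade is exactly the tension the abstract describes, with the server's first-order updates compensating for the client's noisier ZO steps.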