[2506.15461] All is Not Lost: LLM Recovery without Checkpoints
Computer Science > Distributed, Parallel, and Cluster Computing

arXiv:2506.15461 (cs) [Submitted on 18 Jun 2025 (v1), last revised 4 Apr 2026 (this version, v2)]

Title: All is Not Lost: LLM Recovery without Checkpoints

Authors: Nikolay Blagoev, Oğuzhan Ersoy, Lydia Yiyu Chen

Abstract: Training LLMs on decentralized nodes or spot instances lowers the training cost and enables model democratization. The inevitable challenge here is the transient churn of nodes due to failures and the operator's scheduling policies, which leads to the loss of parts of the model (some layers). The conventional approaches to recovering from failures are either checkpointing, where a copy of the entire model is periodically sent to additional storage, or redundant computation. These approaches incur significant communication and/or computation overhead even in failure-free runs and scale poorly in settings with large models. In this paper we propose CheckFree, an efficient recovery method in which a failing stage is substituted by a weighted average of its closest neighboring stages. In contrast to the state of the art, CheckFree requires no additional computation or storage. However, because it averages neighboring stages, it can only recover failures of intermediate stages. We further extend our method to CheckFree+ with out-of-order ...
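The core recovery step described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes each pipeline stage exposes a flat parameter vector, and the weighting coefficients `w_prev` and `w_next` are hypothetical placeholders for whatever weighting scheme the paper derives.

```python
def recover_stage(prev_params, next_params, w_prev=0.5, w_next=0.5):
    """Rebuild a lost intermediate stage as a weighted average of its
    two neighboring stages' parameters (sketch of the CheckFree idea).

    prev_params, next_params: flat lists of floats with equal length,
    standing in for the parameter tensors of stages i-1 and i+1.
    w_prev, w_next: illustrative averaging weights (assumed, not from
    the paper); they should sum to 1 for a convex combination.
    """
    if len(prev_params) != len(next_params):
        raise ValueError("neighboring stages must have identical shapes")
    return [w_prev * p + w_next * n
            for p, n in zip(prev_params, next_params)]


# Example: stage i is lost; approximate it from stages i-1 and i+1.
stage_prev = [1.0, 2.0, 3.0]
stage_next = [3.0, 4.0, 5.0]
recovered = recover_stage(stage_prev, stage_next)  # [2.0, 3.0, 4.0]
```

Because both neighbors are needed, this construction cannot replace the first or last stage, which matches the abstract's note that plain CheckFree only handles intermediate-stage failures.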