[2602.18181] SeedFlood: A Step Toward Scalable Decentralized Training of LLMs
Summary
The paper presents SeedFlood, a novel approach for scalable decentralized training of large language models (LLMs) that minimizes communication overhead while achieving global consensus.
Why It Matters
As the demand for larger and more efficient AI models grows, traditional decentralized training methods face scalability challenges. SeedFlood addresses these issues, enabling the training of billion-parameter models across numerous clients, thus pushing the boundaries of what's feasible in decentralized AI.
Key Takeaways
- SeedFlood reduces communication overhead in decentralized training.
- The method enables efficient training of large models across complex network topologies.
- Experiments show SeedFlood outperforms traditional gossip-based methods.
- It achieves comparable results to first-order methods in large-scale settings.
- This innovation opens new possibilities for decentralized AI applications.
Computer Science > Machine Learning
arXiv:2602.18181 (cs)
[Submitted on 20 Feb 2026]
Title: SeedFlood: A Step Toward Scalable Decentralized Training of LLMs
Authors: Jihun Kim, Namhoon Lee
Abstract: This work presents a new approach to decentralized training, SeedFlood, designed to scale to large models across complex network topologies and achieve global consensus with minimal communication overhead. Traditional gossip-based methods suffer from message communication costs that grow with model size, while information decay over network hops renders global consensus inefficient. SeedFlood departs from these practices by exploiting the seed-reconstructible structure of zeroth-order updates, making the messages near-zero in size and allowing them to be flooded to every client in the network. This mechanism makes communication overhead negligible and independent of model size, removing the primary scalability bottleneck in decentralized training. Consequently, SeedFlood enables training in regimes previously considered impractical, such as billion-parameter models distributed across hundreds of clients. Our experiments on decentralized LLM fine-tuning demonstrate that SeedFlood consistently outperforms gossip-based baselines in both generalization performance and communication efficiency, and even achieves results...
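The key idea the abstract describes, that zeroth-order updates can be reconstructed from a random seed, can be illustrated with a minimal sketch. The code below is not the paper's implementation; it is a generic SPSA-style two-point zeroth-order step (in the spirit of methods like MeZO), where a sender transmits only a `(seed, projected_grad)` pair of constant size, and any receiver regenerates the full perturbation from the seed and applies an identical update. All function names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def zo_step_message(params, loss_fn, seed, eps=1e-3):
    """Compute a zeroth-order (SPSA-style) update and return the tiny
    message (seed, projected_grad) that could be flooded to peers.
    Illustrative sketch only -- not SeedFlood's actual protocol."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(params.shape)  # perturbation direction from seed
    # Two-point finite-difference estimate of the directional derivative.
    g = (loss_fn(params + eps * z) - loss_fn(params - eps * z)) / (2 * eps)
    return seed, g  # message size is constant, independent of model size

def apply_message(params, seed, g, lr=1e-2):
    """Regenerate the perturbation from the seed and apply the update.
    Every client running this on the same message stays in sync."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(params.shape)
    return params - lr * g * z

# Toy demo: two "clients" share only (seed, scalar) yet stay identical.
loss = lambda x: float(np.sum(x ** 2))
x_a = x_b = np.ones(4)
for step in range(200):
    seed, g = zo_step_message(x_a, loss, seed=step)
    x_a = apply_message(x_a, seed, g)  # sender applies locally
    x_b = apply_message(x_b, seed, g)  # receiver reconstructs the same step
```

Because both clients draw the perturbation from the same seeded generator, the full parameter update never needs to be transmitted, which is why the message cost stays independent of model size.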