[D] How does distributed proof of work computing handle the coordination needs of neural network training?
I've been trying to understand the technical setup of a project called Qubic. It claims to use distributed proof-of-work computing for neural network training, and I want to know whether the idea holds together technically.

The main issue with distributed training is coordination. Training large neural networks requires frequent sharing of gradient updates across nodes. This process is sensitive to delays and works far better with fast connections inside a data center than over the internet on separate...
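To make the coordination cost concrete, here is a rough back-of-the-envelope sketch of a synchronous data-parallel step, where every optimizer update waits on a gradient all-reduce. All numbers (model size, bandwidth, round-trip times, number of communication rounds) are illustrative assumptions, not measurements of Qubic or any real cluster:

```python
def step_time(compute_s, grad_bytes, bandwidth_bps, rtt_s, rounds=1):
    """Rough per-step wall time for synchronous data parallelism:
    local compute, then a latency- and bandwidth-bound gradient
    exchange (rounds ~ the number of latency-bound phases in a
    ring/tree-style all-reduce)."""
    comm_s = rounds * rtt_s + grad_bytes * 8 / bandwidth_bps
    return compute_s + comm_s

# Assumed model: 1B fp32 parameters -> ~4 GB of gradients per step.
grad_bytes = 4 * 1_000_000_000

# Inside a data center: ~100 Gb/s links, ~0.1 ms RTT (assumed).
dc = step_time(1.0, grad_bytes, 100e9, 1e-4, rounds=8)

# Over the public internet: ~100 Mb/s, ~50 ms RTT (assumed).
inet = step_time(1.0, grad_bytes, 100e6, 50e-3, rounds=8)

print(f"data-center step: {dc:.1f} s")   # communication adds well under a second
print(f"internet step:    {inet:.1f} s") # communication dominates by minutes
```

Under these assumptions the same one-second compute step takes on the order of a second inside a data center but minutes over consumer internet links, which is why naive synchronous training across volunteer nodes tends not to scale without techniques like gradient compression or asynchronous/local-update schemes.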