DECENTRALIZED DISTRIBUTED DEEP LEARNING
Abstract:
Various embodiments are provided for decentralized distributed deep learning by one or more processors in a computing system. Asynchronous distributed training of one or more machine learning models may be performed by generating a list of neighbour nodes for each node in a plurality of nodes and by creating, for each node, a first thread for continuous communication according to a weight management operation and a second thread for continuous computation of a gradient. One or more variables are shared between the first thread and the second thread.
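The following is a minimal sketch, in Python, of the two-thread pattern the abstract describes: one thread continuously exchanges and averages weights with neighbour nodes while a second thread continuously computes gradients, with the model parameters shared between them. All names here (such as fetch_neighbor_weights, NEIGHBOR_LIST, and the pairwise averaging rule) are illustrative assumptions and are not taken from the patent itself.

```python
import threading
import time
import numpy as np

NEIGHBOR_LIST = ["node-1", "node-2"]   # assumed list of neighbour nodes for this node
LOCK = threading.Lock()                # guards the variables shared between the two threads
weights = np.zeros(10)                 # model parameters shared by both threads
stop = threading.Event()

def fetch_neighbor_weights(neighbor):
    """Placeholder for receiving a neighbour's parameters over the network (assumed)."""
    return np.random.randn(10)

def communication_thread():
    """First thread: continuously average weights with neighbour nodes (weight management)."""
    global weights
    while not stop.is_set():
        for neighbor in NEIGHBOR_LIST:
            remote = fetch_neighbor_weights(neighbor)
            with LOCK:
                weights = 0.5 * (weights + remote)  # simple pairwise averaging, an assumed rule
        time.sleep(0.01)

def computation_thread(steps=100, lr=0.1):
    """Second thread: continuously compute gradients and update the shared weights."""
    global weights
    for _ in range(steps):
        with LOCK:
            local = weights.copy()
        grad = 2 * local                 # gradient of a toy quadratic loss ||w||^2
        with LOCK:
            weights = local - lr * grad  # apply the update to the shared variable
    stop.set()

comm = threading.Thread(target=communication_thread)
comp = threading.Thread(target=computation_thread)
comm.start(); comp.start()
comp.join(); comm.join()
print("final weights:", weights)
```

Because the two threads run asynchronously, the computation thread never blocks waiting for communication; the lock only serializes access to the shared parameter variable, which is the sharing arrangement the abstract refers to.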