-
Publication No.: US20190340534A1
Publication Date: 2019-11-07
Application No.: US16335695
Filing Date: 2017-09-07
Applicant: Google LLC
Inventor: Hugh Brendan McMahan , Dave Morris Bacon , Jakub Konecny , Xinnan Yu
Abstract: The present disclosure provides efficient communication techniques for transmission of model updates within a machine learning framework, such as, for example, a federated learning framework in which a high-quality centralized model is trained on training data distributed over a large number of clients, each with unreliable network connections and low computational power. In an example federated learning setting, in each of a plurality of rounds, each client independently updates the model based on its local data and communicates the updated model back to the server, where all the client-side updates are used to update a global model. The present disclosure provides systems and methods that reduce communication costs. In particular, the present disclosure provides at least: structured update approaches, in which the model update is restricted to be small, and sketched update approaches, in which the model update is compressed before being sent to the server.
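The abstract above names two families of compression: structured updates (the update is constrained to a small parameterization) and sketched updates (a full update is compressed before transmission). As a minimal sketch of the second idea only, the following hypothetical example compresses an update by random subsampling with rescaling, so the server-side reconstruction is an unbiased estimate of the full update. The function names and the specific subsampling scheme are illustrative assumptions, not the patent's claimed method.

```python
import numpy as np

def sketch_update(update, keep_frac=0.25, seed=0):
    """Compress a model update by random subsampling (a 'sketched update').

    Only a random keep_frac fraction of entries is transmitted, scaled by
    1/keep_frac so the reconstruction is an unbiased estimate of the update.
    """
    rng = np.random.default_rng(seed)
    flat = update.ravel()
    k = max(1, int(keep_frac * flat.size))
    idx = rng.choice(flat.size, size=k, replace=False)
    values = flat[idx] / keep_frac  # rescale kept entries for unbiasedness
    return idx, values, update.shape

def unsketch_update(idx, values, shape):
    """Server-side reconstruction of the sparse, unbiased update estimate."""
    flat = np.zeros(np.prod(shape))
    flat[idx] = values
    return flat.reshape(shape)

# A client sends (idx, values) instead of the dense update; averaging many
# such unbiased sketches across clients approximates the true mean update.
update = np.arange(12.0).reshape(3, 4)
idx, vals, shape = sketch_update(update, keep_frac=0.25)
recon = unsketch_update(idx, vals, shape)
```

Because each sketch is unbiased, averaging sketches from many clients converges to the averaged full update while each client transmits only a fraction of the coordinates.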
-
Publication No.: US20230376856A1
Publication Date: 2023-11-23
Application No.: US18365734
Filing Date: 2023-08-04
Applicant: Google LLC
Inventor: Hugh Brendan McMahan , Dave Morris Bacon , Jakub Konecny , Xinnan Yu
Abstract: The present disclosure provides efficient communication techniques for transmission of model updates within a machine learning framework, such as, for example, a federated learning framework in which a high-quality centralized model is trained on training data distributed over a large number of clients, each with unreliable network connections and low computational power. In an example federated learning setting, in each of a plurality of rounds, each client independently updates the model based on its local data and communicates the updated model back to the server, where all the client-side updates are used to update a global model. The present disclosure provides systems and methods that reduce communication costs. In particular, the present disclosure provides at least: structured update approaches, in which the model update is restricted to be small, and sketched update approaches, in which the model update is compressed before being sent to the server.
-
23.
Publication No.: US10510021B1
Publication Date: 2019-12-17
Application No.: US16434627
Filing Date: 2019-06-07
Applicant: Google LLC
Inventor: Satyen Chandrakant Kale , Daniel Holtmann-Rice , Sanjiv Kumar , Enxu Yan , Xinnan Yu
Abstract: Systems and methods for evaluating a loss function or a gradient of the loss function. In one example embodiment, a computer-implemented method includes partitioning a weight matrix into a plurality of blocks. The method includes identifying a first set of labels for each of the plurality of blocks with a score greater than a first threshold value. The method includes constructing a sparse approximation of a scoring vector for each of the plurality of blocks based on the first set of labels. The method includes determining a correction value for each sparse approximation of the scoring vector. The method includes determining an approximation of a loss or a gradient of a loss associated with the scoring function based on each sparse approximation of the scoring vector and the correction value associated with the sparse approximation of the scoring vector.
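One possible reading of the abstract above, for a softmax-style loss over a large label space: split the scoring vector into blocks, keep exactly the labels whose score exceeds a threshold, and replace each block's discarded labels with a single per-block correction value. The sketch below is a hypothetical illustration under that reading; the block partitioning, the threshold rule, and the particular correction (block count times the exponential of the mean discarded score) are assumptions for demonstration, not the patented construction.

```python
import numpy as np

def approx_softmax_loss(W, x, true_label, num_blocks=4, tau=0.0):
    """Blockwise sparse approximation of softmax cross-entropy (a sketch).

    The scoring vector W @ x is split into blocks; within each block, labels
    with score > tau contribute exactly to the partition function, while the
    dropped labels are summarized by one correction value per block.
    """
    scores = W @ x                        # full scoring vector
    partition = 0.0
    for blk in np.array_split(scores, num_blocks):
        keep = blk > tau                  # labels scoring above the threshold
        partition += np.exp(blk[keep]).sum()
        dropped = blk[~keep]
        if dropped.size:                  # correction value for this block
            partition += dropped.size * np.exp(dropped.mean())
    # Approximate cross-entropy: log partition minus the true label's score.
    return np.log(partition) - scores[true_label]
```

With a very low threshold every label is kept and the approximation matches the exact loss; raising the threshold trades accuracy for sparser per-block computation, since by Jensen's inequality the correction term underestimates the dropped labels' exact contribution.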
-
24.
Publication No.: US20190378037A1
Publication Date: 2019-12-12
Application No.: US16434627
Filing Date: 2019-06-07
Applicant: Google LLC
Inventor: Satyen Chandrakant Kale , Daniel Holtmann-Rice , Sanjiv Kumar , Enxu Yan , Xinnan Yu
Abstract: Systems and methods for evaluating a loss function or a gradient of the loss function. In one example embodiment, a computer-implemented method includes partitioning a weight matrix into a plurality of blocks. The method includes identifying a first set of labels for each of the plurality of blocks with a score greater than a first threshold value. The method includes constructing a sparse approximation of a scoring vector for each of the plurality of blocks based on the first set of labels. The method includes determining a correction value for each sparse approximation of the scoring vector. The method includes determining an approximation of a loss or a gradient of a loss associated with the scoring function based on each sparse approximation of the scoring vector and the correction value associated with the sparse approximation of the scoring vector.