-
Publication Number: US11763197B2
Publication Date: 2023-09-19
Application Number: US16850053
Filing Date: 2020-04-16
Applicant: Google LLC
Inventor: Hugh Brendan McMahan , Dave Morris Bacon , Jakub Konecny , Xinnan Yu
Abstract: The present disclosure provides efficient communication techniques for the transmission of model updates within a machine learning framework, such as, for example, a federated learning framework in which a high-quality centralized model is trained on training data distributed over a large number of clients, each with unreliable network connections and low computational power. In an example federated learning setting, in each of a plurality of rounds, each client independently updates the model based on its local data and communicates the updated model back to the server, where all of the client-side updates are used to update a global model. The present disclosure provides systems and methods that reduce communication costs. In particular, the present disclosure provides at least: structured update approaches, in which the model update is restricted to be small, and sketched update approaches, in which the model update is compressed before being sent to the server.
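The abstract above describes a round-based protocol: each client computes a local model update, compresses ("sketches") it, and the server averages the client updates into the global model. The following toy Python/NumPy sketch illustrates one such round using random coordinate subsampling with unbiased rescaling as the sketch; the linear-model setup, function names, and the particular sketch operator are illustrative assumptions, not the patented method itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_model, data_x, data_y, lr=0.1):
    """One local gradient step on a linear least-squares model; returns the model delta."""
    pred = data_x @ global_model
    grad = data_x.T @ (pred - data_y) / len(data_y)
    return -lr * grad  # client sends a delta, not the full model

def sketch(update, keep_fraction=0.25):
    """Hypothetical sketched update: keep a random subset of coordinates,
    zero the rest, and rescale so the sketch is unbiased in expectation."""
    mask = rng.random(update.shape) < keep_fraction
    return np.where(mask, update / keep_fraction, 0.0)

# Toy federated round: 3 clients sharing one global model.
dim = 8
global_model = np.zeros(dim)
clients = [(rng.normal(size=(20, dim)), rng.normal(size=20)) for _ in range(3)]

sketched = [sketch(local_update(global_model, x, y)) for x, y in clients]
global_model += np.mean(sketched, axis=0)  # server averages the compressed updates
```

Because each client transmits only the retained coordinates (here roughly a quarter of them), uplink cost drops proportionally, at the price of added variance in the aggregated update.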
-
Publication Number: US20200242514A1
Publication Date: 2020-07-30
Application Number: US16850053
Filing Date: 2020-04-16
Applicant: Google LLC
Inventor: Hugh Brendan McMahan , Dave Morris Bacon , Jakub Konecny , Xinnan Yu
Abstract: The present disclosure provides efficient communication techniques for the transmission of model updates within a machine learning framework, such as, for example, a federated learning framework in which a high-quality centralized model is trained on training data distributed over a large number of clients, each with unreliable network connections and low computational power. In an example federated learning setting, in each of a plurality of rounds, each client independently updates the model based on its local data and communicates the updated model back to the server, where all of the client-side updates are used to update a global model. The present disclosure provides systems and methods that reduce communication costs. In particular, the present disclosure provides at least: structured update approaches, in which the model update is restricted to be small, and sketched update approaches, in which the model update is compressed before being sent to the server.
-
Publication Number: US20190340534A1
Publication Date: 2019-11-07
Application Number: US16335695
Filing Date: 2017-09-07
Applicant: Google LLC
Inventor: Hugh Brendan McMahan , Dave Morris Bacon , Jakub Konecny , Xinnan Yu
Abstract: The present disclosure provides efficient communication techniques for the transmission of model updates within a machine learning framework, such as, for example, a federated learning framework in which a high-quality centralized model is trained on training data distributed over a large number of clients, each with unreliable network connections and low computational power. In an example federated learning setting, in each of a plurality of rounds, each client independently updates the model based on its local data and communicates the updated model back to the server, where all of the client-side updates are used to update a global model. The present disclosure provides systems and methods that reduce communication costs. In particular, the present disclosure provides at least: structured update approaches, in which the model update is restricted to be small, and sketched update approaches, in which the model update is compressed before being sent to the server.
-
Publication Number: US10657461B2
Publication Date: 2020-05-19
Application Number: US16335695
Filing Date: 2017-09-07
Applicant: Google LLC
Inventor: Hugh Brendan McMahan , Dave Morris Bacon , Jakub Konecny , Xinnan Yu
Abstract: The present disclosure provides efficient communication techniques for the transmission of model updates within a machine learning framework, such as, for example, a federated learning framework in which a high-quality centralized model is trained on training data distributed over a large number of clients, each with unreliable network connections and low computational power. In an example federated learning setting, in each of a plurality of rounds, each client independently updates the model based on its local data and communicates the updated model back to the server, where all of the client-side updates are used to update a global model. The present disclosure provides systems and methods that reduce communication costs. In particular, the present disclosure provides at least: structured update approaches, in which the model update is restricted to be small, and sketched update approaches, in which the model update is compressed before being sent to the server.
-
Publication Number: US20230376856A1
Publication Date: 2023-11-23
Application Number: US18365734
Filing Date: 2023-08-04
Applicant: Google LLC
Inventor: Hugh Brendan McMahan , Dave Morris Bacon , Jakub Konecny , Xinnan Yu
Abstract: The present disclosure provides efficient communication techniques for the transmission of model updates within a machine learning framework, such as, for example, a federated learning framework in which a high-quality centralized model is trained on training data distributed over a large number of clients, each with unreliable network connections and low computational power. In an example federated learning setting, in each of a plurality of rounds, each client independently updates the model based on its local data and communicates the updated model back to the server, where all of the client-side updates are used to update a global model. The present disclosure provides systems and methods that reduce communication costs. In particular, the present disclosure provides at least: structured update approaches, in which the model update is restricted to be small, and sketched update approaches, in which the model update is compressed before being sent to the server.
-