-
1.
Publication No.: US20190385043A1
Publication Date: 2019-12-19
Application No.: US16012356
Filing Date: 2018-06-19
Applicant: Adobe Inc.
Inventor: Sunav Choudhary, Saurabh Kumar Mishra, Manoj Ghuhan A, Ankur Garg
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that asynchronously train a machine learning model across client devices that implement local versions of the model while preserving client data privacy. To train the model across devices, in some embodiments, the disclosed systems send global parameters for a global machine learning model from a server device to client devices. A subset of the client devices uses local machine learning models corresponding to the global model and client training data to modify the global parameters. Based on those modifications, the subset of client devices sends modified parameter indicators to the server device for the server device to use in adjusting the global parameters. By utilizing the modified parameter indicators (and not client training data), in certain implementations, the disclosed systems accurately train a machine learning model without exposing training data from the client device.
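The abstract above describes a federated-learning-style round: the server broadcasts global parameters, a subset of clients trains locally on private data, and only parameter-change indicators (never the raw data) flow back for aggregation. The sketch below is an illustrative toy version of that flow, not the claimed method: the function names `local_update` and `server_round` are hypothetical, and a one-parameter linear model stands in for the machine learning model.

```python
import random

def local_update(global_params, client_data, lr=0.1):
    """One client's local training step on its private data.

    Takes one gradient step on a 1-parameter model y = w * x, then
    returns only the parameter delta (the "modified parameter
    indicator"); the raw (x, y) pairs never leave the client."""
    w = global_params[0]
    grad = sum(2 * (w * x - y) * x for x, y in client_data) / len(client_data)
    return [(w - lr * grad) - w]  # delta only, not data

def server_round(global_params, client_datasets, fraction=0.5):
    """Sample a subset of clients, collect their deltas, and fold the
    averaged delta into the global parameters."""
    k = max(1, int(len(client_datasets) * fraction))
    subset = random.sample(client_datasets, k)
    deltas = [local_update(global_params, data) for data in subset]
    avg = [sum(d[i] for d in deltas) / len(deltas)
           for i in range(len(global_params))]
    return [g + a for g, a in zip(global_params, avg)]

# Private data held on three clients, all consistent with y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)],
           [(3.0, 6.0)],
           [(0.5, 1.0), (4.0, 8.0)]]
params = [0.0]
for _ in range(50):
    params = server_round(params, clients, fraction=1.0)
print(round(params[0], 2))
```

With `fraction` below 1.0 only a sampled subset of clients contributes each round, mirroring the abstract's "subset of the client devices"; here `fraction=1.0` is used so the toy run is deterministic.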
-
2.
Publication No.: US11593634B2
Publication Date: 2023-02-28
Application No.: US16012356
Filing Date: 2018-06-19
Applicant: Adobe Inc.
Inventor: Sunav Choudhary, Saurabh Kumar Mishra, Manoj Ghuhan A, Ankur Garg
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that asynchronously train a machine learning model across client devices that implement local versions of the model while preserving client data privacy. To train the model across devices, in some embodiments, the disclosed systems send global parameters for a global machine learning model from a server device to client devices. A subset of the client devices uses local machine learning models corresponding to the global model and client training data to modify the global parameters. Based on those modifications, the subset of client devices sends modified parameter indicators to the server device for the server device to use in adjusting the global parameters. By utilizing the modified parameter indicators (and not client training data), in certain implementations, the disclosed systems accurately train a machine learning model without exposing training data from the client device.
-
3.
Publication No.: US11170320B2
Publication Date: 2021-11-09
Application No.: US16040057
Filing Date: 2018-07-19
Applicant: Adobe Inc.
Inventor: Ankur Garg, Sunav Choudhary, Saurabh Kumar Mishra, Manoj Ghuhan A.
Abstract: Systems and techniques are described herein for updating a machine learning model on edge servers. Local parameters of the machine learning model are updated at a plurality of edge servers using fresh data on the edge servers, rather than waiting for the data to reach a global server to update the machine learning model. Hence, latency is significantly reduced, making the systems and techniques described herein suitable for real-time services that support streaming data. Moreover, by updating global parameters of the machine learning model at a global server in a deterministic manner based on parameter updates from the edge servers, rather than by including randomization steps, global parameters of the model converge quickly to their optimal values. The global parameters are sent from the global server to the plurality of edge servers at each iteration, thereby synchronizing the machine learning model on the edge servers.
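This abstract describes a variant of the scheme aimed at edge servers handling streaming data: each edge refines its local copy on a fresh batch, and the global server deterministically aggregates the edge-local parameters (no randomized sampling) and broadcasts the result back every iteration. The sketch below is an illustrative toy version under those assumptions, not the patented algorithm; `edge_update` and `global_step` are hypothetical names, and plain averaging stands in for the deterministic aggregation rule.

```python
def edge_update(w, batch, lr=0.05):
    """One edge server refines its local copy of the parameter on a
    fresh streaming batch; the raw data stays on the edge."""
    grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
    return w - lr * grad

def global_step(w_global, edge_batches, lr=0.05):
    """Deterministic aggregation: every edge contributes (no random
    subsampling), the edge-local parameters are averaged, and the
    result is broadcast back so all edge copies stay synchronized
    at each iteration."""
    edge_locals = [edge_update(w_global, batch, lr) for batch in edge_batches]
    return sum(edge_locals) / len(edge_locals)

# Streaming batches arriving at two edge servers, both consistent
# with the target model y = 3x.
stream_a = [[(1.0, 3.0), (2.0, 6.0)]] * 40
stream_b = [[(0.5, 1.5), (1.5, 4.5)]] * 40
w = 0.0
for batch_a, batch_b in zip(stream_a, stream_b):
    w = global_step(w, [batch_a, batch_b])
print(round(w, 2))
```

Because every iteration is deterministic given the incoming batches, two runs over the same streams produce identical global parameters, which is the convergence property the abstract attributes to avoiding randomization steps.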
-
4.
Publication No.: US20200027033A1
Publication Date: 2020-01-23
Application No.: US16040057
Filing Date: 2018-07-19
Applicant: Adobe Inc.
Inventor: Ankur Garg, Sunav Choudhary, Saurabh Kumar Mishra, Manoj Ghuhan A.
Abstract: Systems and techniques are described herein for updating a machine learning model on edge servers. Local parameters of the machine learning model are updated at a plurality of edge servers using fresh data on the edge servers, rather than waiting for the data to reach a global server to update the machine learning model. Hence, latency is significantly reduced, making the systems and techniques described herein suitable for real-time services that support streaming data. Moreover, by updating global parameters of the machine learning model at a global server in a deterministic manner based on parameter updates from the edge servers, rather than by including randomization steps, global parameters of the model converge quickly to their optimal values. The global parameters are sent from the global server to the plurality of edge servers at each iteration, thereby synchronizing the machine learning model on the edge servers.