-
Publication No.: US20230385694A1
Publication Date: 2023-11-30
Application No.: US18202459
Filing Date: 2023-05-26
Applicant: THE TORONTO-DOMINION BANK
Inventor: Jesse Cole Cresswell , Brendan Leigh Ross , Ka Ho Yenson Lau , Junfeng Wen , Yi Sui
IPC: G06N20/00
CPC classification number: G06N20/00
Abstract: Model training systems collaborate on model training without revealing their respective private data sets. For each private data set, a set of client weights is learned over a set of computer models whose parameters are also learned during training. Inference for a particular private data set is computed as a mixture of the computer model parameters according to that data set's client weights. At each training iteration, one step updates the client weights based on how well the sampled models represent the private data set. In another step, gradients are determined for each sampled model and may be scaled by the client weight for that model, relatively increasing the gradient contribution of a private data set to the parameters of models that correspond most closely to that data set.
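The two alternating steps described in the abstract can be illustrated with a minimal sketch. This is not the patented method itself; it assumes a toy setup of linear models and a single client, and all names (`client_weights`, `models`) are illustrative. The client weights are refreshed from per-model fit on the private data, and each model's gradient is scaled by its client weight before the update.

```python
import numpy as np

# Toy setup: K shared linear models, one client holding a private data set.
rng = np.random.default_rng(0)
K, d = 3, 4                              # number of shared models, feature dim
models = rng.normal(size=(K, d))         # parameters of the K computer models
client_weights = np.full(K, 1.0 / K)     # this client's mixture weights

X = rng.normal(size=(32, d))             # private features (never shared)
y = X @ np.array([1.0, -2.0, 0.5, 0.0])  # private targets

def predict(X, client_weights, models):
    # Inference: mix the model parameters according to the client weights.
    mixed = client_weights @ models
    return X @ mixed

initial_loss = np.mean((predict(X, client_weights, models) - y) ** 2)

for _ in range(200):
    # Step 1: update client weights by how well each model fits the data
    # (softmax-style reweighting of per-model mean squared errors).
    losses = np.array([np.mean((X @ m - y) ** 2) for m in models])
    client_weights = np.exp(-(losses - losses.min()))
    client_weights /= client_weights.sum()

    # Step 2: per-model gradient, scaled by that model's client weight,
    # so models matching this data set receive larger updates from it.
    for k in range(K):
        grad = 2 * X.T @ (X @ models[k] - y) / len(X)
        models[k] -= 0.05 * client_weights[k] * grad

final_loss = np.mean((predict(X, client_weights, models) - y) ** 2)
```

In a multi-client run, Step 2's weighted gradients from all clients would be aggregated per model, which is where the "relatively increasing the gradient contribution" effect arises.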
-
Publication No.: US20230153461A1
Publication Date: 2023-05-18
Application No.: US17987761
Filing Date: 2022-11-15
Applicant: Hamid R. Tizhoosh , THE TORONTO-DOMINION BANK
Inventor: Shivam Kalra , Jesse Cole Cresswell , Junfeng Wen , Maksims Volkovs , Hamid R. Tizhoosh
IPC: G06F21/62
CPC classification number: G06F21/6245
Abstract: A model training system protects against leakage of private data in a federated learning environment by training a private model in conjunction with a proxy model. The proxy model is trained with protections for the private data and may be shared with other participants. Proxy models received from other participants are used to train the private model, enabling the private model to benefit from parameters based on other participants' private data without privacy leakage. The proxy model may be trained with a differentially private algorithm that quantifies a privacy cost for the proxy model, enabling a participant to measure the potential exposure of private data and drop out. Iterations may include training the proxy and private models and then mixing the proxy models with those of other participants. The mixing may include updating and applying a bias to account for the weights of other participants in the received proxy models.
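The train-then-mix iteration can be sketched as follows. This is a hedged toy illustration, not the patented algorithm: it assumes two participants with linear models, uses DP-SGD-style gradient clipping and Gaussian noise on the proxy only, and models "mixing" as a simple proxy swap followed by pulling each private model toward the received proxy. All names (`private`, `proxy`, `clip`, `sigma`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
w_true = np.array([1.0, -1.0, 0.5, 2.0])

def make_data(n):
    X = rng.normal(size=(n, d))
    return X, X @ w_true

participants = []
for _ in range(2):
    X, y = make_data(64)
    participants.append({
        "X": X, "y": y,
        "private": np.zeros(d),   # stays local, never shared
        "proxy": np.zeros(d),     # trained with DP noise, safe to share
    })

clip, sigma, lr = 1.0, 0.1, 0.1
initial_losses = [np.mean((p["X"] @ p["private"] - p["y"]) ** 2)
                  for p in participants]

for _ in range(50):
    for p in participants:
        X, y = p["X"], p["y"]
        # Private model: ordinary gradient step on private data.
        g = 2 * X.T @ (X @ p["private"] - y) / len(X)
        p["private"] -= lr * g
        # Proxy model: clipped, noised gradient; the clip bound and noise
        # scale are what let a participant quantify the privacy cost.
        g = 2 * X.T @ (X @ p["proxy"] - y) / len(X)
        g = g / max(1.0, np.linalg.norm(g) / clip)
        g = g + sigma * rng.normal(size=d)
        p["proxy"] -= lr * g
    # Mixing: exchange proxies, then pull each private model toward the
    # received proxy, so it learns from the other's data without seeing it.
    p0, p1 = participants
    p0["proxy"], p1["proxy"] = p1["proxy"].copy(), p0["proxy"].copy()
    for p in participants:
        p["private"] += 0.1 * (p["proxy"] - p["private"])

final_losses = [np.mean((p["X"] @ p["private"] - p["y"]) ** 2)
                for p in participants]
```

The bias-correction step mentioned in the abstract (accounting for other participants' weights in received proxies) is omitted here; a fuller sketch would reweight the received proxy before mixing it into the private model.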
-