Publication number: US20230385693A1
Publication date: 2023-11-30
Application number: US18202450
Application date: 2023-05-26
Applicant: THE TORONTO-DOMINION BANK
Inventor: Jesse Cole Cresswell, Brendan Leigh Ross, Anthony Lawrence Caterini, Gabriel Loaiza Ganem
Abstract: Probability density modeling, such as for generative modeling, of data on a manifold of a high-dimensional space is performed with an implicitly-defined manifold, such that the set of points belonging to the manifold is the zero set of a manifold-defining function. An energy model is trained to learn an energy function that, when evaluated on the manifold, describes a probability density over the manifold. As such, the relevant portions of the energy function are “filtered through” the defined manifold during training and in application. The combined energy function and manifold-defining function provide an “energy-based implicit manifold” that can more effectively model probability densities of a manifold in the high-dimensional space. Because the manifold-defining function and the energy function are defined across the high-dimensional space, they may more effectively learn geometries and avoid the distortions due to change of dimension that occur for models that represent the manifold in a lower-dimensional space.
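As a minimal sketch of how such a model might be organized, the following PyTorch snippet pairs a manifold-defining function F, whose zero set is the manifold, with an energy function E read off on that zero set. The toy dimensions, network shapes, and the `project_to_manifold` helper are illustrative assumptions, not the method as claimed in the filing.

```python
import torch
import torch.nn as nn

D, CODIM = 3, 1  # assumed ambient dimension and manifold codimension (toy values)

# F: R^D -> R^(D-d); the manifold is the zero set {x : F(x) = 0}
manifold_fn = nn.Sequential(nn.Linear(D, 64), nn.SiLU(), nn.Linear(64, CODIM))
# E: R^D -> R; exp(-E(x)) is an unnormalized density, meaningful on the manifold
energy_fn = nn.Sequential(nn.Linear(D, 64), nn.SiLU(), nn.Linear(64, 1))

def project_to_manifold(x, steps=50, lr=0.1):
    """Pull ambient points toward the zero set of F by descending ||F(x)||^2."""
    x = x.clone().requires_grad_(True)
    for _ in range(steps):
        (grad,) = torch.autograd.grad(manifold_fn(x).pow(2).sum(), x)
        x = (x - lr * grad).detach().requires_grad_(True)
    return x.detach()

def unnormalized_log_density(x):
    """log p(x) up to an additive constant, for x on (or near) the manifold."""
    return -energy_fn(x).squeeze(-1)

points = project_to_manifold(torch.randn(8, D))
print(unnormalized_log_density(points))
```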
-
Publication number: US20230385444A1
Publication date: 2023-11-30
Application number: US18202440
Application date: 2023-05-26
Applicant: THE TORONTO-DOMINION BANK
Inventor: Jesse Cole Cresswell, Atiyeh Ashari Ghomi, Yaqiao Luo, Maria Esipova
IPC: G06F21/62
CPC classification number: G06F21/6245
Abstract: A model evaluation system evaluates the extent to which privacy-aware training processes affect the direction of training gradients for different groups. A modified differential-privacy (“DP”) training process provides per-sample gradient adjustments with parameters that may be adaptively modified for different data batches. Per-sample gradients are modified with respect to a reference bound and a clipping bound. A scaling factor may be determined for each per-sample gradient as the greater of the reference bound and the magnitude of that per-sample gradient. Per-sample gradients may then be adjusted by the ratio of the clipping bound to the scaling factor. A relative privacy cost between groups may be determined as excess training risk, based on the difference in a group’s gradient direction relative to the unadjusted batch gradient and to the adjusted batch gradient produced by the privacy-aware training.
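The scaling rule as described admits a compact reading; the NumPy sketch below is an illustrative assumption of that reading, not the claimed procedure.

```python
import numpy as np

def adjust_per_sample_gradients(grads, reference_bound, clipping_bound):
    """grads: (batch, dim) array holding one gradient per sample."""
    norms = np.linalg.norm(grads, axis=1)                # per-sample gradient magnitudes
    scaling = np.maximum(reference_bound, norms)         # greater of reference bound and magnitude
    return grads * (clipping_bound / scaling)[:, None]   # scale by clipping_bound / scaling factor

grads = np.random.randn(4, 10)
adjusted = adjust_per_sample_gradients(grads, reference_bound=1.0, clipping_bound=1.0)
print(np.linalg.norm(adjusted, axis=1))  # each norm is now at most clipping_bound
```

Note that when the reference bound equals the clipping bound C, this reduces to the standard DP-SGD clipping rule g · C / max(C, ‖g‖); the separate reference bound is what lets the adjustment deviate from plain clipping.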
-
Publication number: US20230244917A1
Publication date: 2023-08-03
Application number: US18083345
Application date: 2022-12-16
Applicant: THE TORONTO-DOMINION BANK
Inventor: Gabriel Loaiza Ganem, Brendan Leigh Ross, Jesse Cole Cresswell, Anthony Lawrence Caterini
IPC: G06N3/047, G06N3/0455, G06N3/088
CPC classification number: G06N3/047, G06N3/0455, G06N3/088
Abstract: To effectively learn a probability density from a data set in a high-dimensional space without manifold overfitting, a computer model first learns an autoencoder model that transforms data from the high-dimensional space to a low-dimensional space, and then learns a probability density model that may be effectively trained with maximum likelihood. Separating these components allows different types of models to be employed for each portion (e.g., manifold learning and density learning) and permits effective modeling of high-dimensional data sets that lie along a manifold representable with fewer dimensions, thus learning both the density and the manifold and enabling effective data generation and density estimation.
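A hedged sketch of the two-step procedure, assuming a simple MLP autoencoder and a Gaussian mixture as the latent maximum-likelihood density model; both are illustrative choices, since the approach permits other model types for each portion.

```python
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

D, d = 784, 16  # assumed ambient and latent dimensions
encoder = nn.Sequential(nn.Linear(D, 256), nn.ReLU(), nn.Linear(256, d))
decoder = nn.Sequential(nn.Linear(d, 256), nn.ReLU(), nn.Linear(256, D))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

def train_autoencoder(data, epochs=100):
    for _ in range(epochs):
        loss = (decoder(encoder(data)) - data).pow(2).mean()  # reconstruction learns the manifold
        opt.zero_grad(); loss.backward(); opt.step()

def fit_latent_density(data, n_components=10):
    with torch.no_grad():
        z = encoder(data).numpy()
    return GaussianMixture(n_components=n_components).fit(z)  # maximum-likelihood fit in latent space

def sample(gmm, n):
    z, _ = gmm.sample(n)  # draw latent codes from the density model
    with torch.no_grad():
        return decoder(torch.as_tensor(z, dtype=torch.float32))  # map back to data space
```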
-
Publication number: US20230153461A1
Publication date: 2023-05-18
Application number: US17987761
Application date: 2022-11-15
Applicant: Hamid R. Tizhoosh, THE TORONTO-DOMINION BANK
Inventor: Shivam Kalra, Jesse Cole Cresswell, Junfeng Wen, Maksims Volkovs, Hamid R. Tizhoosh
IPC: G06F21/62
CPC classification number: G06F21/6245
Abstract: A model training system protects against leakage of private data in a federated learning environment by training a private model in conjunction with a proxy model. The proxy model is trained with protections for the private data and may be shared with other participants. Proxy models received from other participants are used to train the private model, enabling the private model to benefit from parameters based on other models’ private data without privacy leakage. The proxy model may be trained with a differentially private algorithm that quantifies a privacy cost for the proxy model, enabling a participant to measure the potential exposure of its private data and to drop out. Each iteration may include training the proxy and private models and then mixing the proxy models with other participants. The mixing may include updating and applying a bias to account for the weights of other participants in the received proxy models.
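One plausible shape for a single local iteration is sketched below in PyTorch; the distillation loss, uniform peer weighting, and all function names are assumptions for illustration, and the differentially private optimizer for the proxy step is elided.

```python
import torch
import torch.nn.functional as F

def local_round(private_model, proxy_model, peer_proxies, data, labels,
                opt_private, opt_proxy):
    # 1) Proxy update on private data; in the described system this step would
    #    use a differentially private optimizer (e.g., DP-SGD), elided here.
    proxy_loss = F.cross_entropy(proxy_model(data), labels)
    opt_proxy.zero_grad(); proxy_loss.backward(); opt_proxy.step()

    # 2) Private-model update: supervised loss plus distillation from peers'
    #    proxy models, so it benefits from their data without ever seeing it.
    logits = private_model(data)
    loss = F.cross_entropy(logits, labels)
    for peer in peer_proxies:
        with torch.no_grad():
            soft_targets = F.softmax(peer(data), dim=-1)
        loss = loss + F.kl_div(F.log_softmax(logits, dim=-1),
                               soft_targets, reduction="batchmean")
    opt_private.zero_grad(); loss.backward(); opt_private.step()
```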