DEFENSE FROM MEMBERSHIP INFERENCE ATTACKS IN TRANSFER LEARNING

    Publication No.: US20240330757A1

    Publication Date: 2024-10-03

    Application No.: US18194603

    Filing Date: 2023-03-31

    IPC Class: G06N20/00

    CPC Class: G06N20/00

    Abstract: A computer-implemented method of training a machine learning model to prevent data leakage from membership inference attacks. A pre-trained model and a pre-defined hyperparameter λ are received as inputs. A forward pass is applied by querying the pre-trained model with private data, and an initial loss distribution L_INIT of loss values is computed. After a fine-tuning operation begins to transform the pre-trained model into a fine-tuned model, the batch loss of a minibatch from the private data is computed, along with a batch loss distribution L_BATCH. A divergence metric is computed between L_INIT and L_BATCH, and its output is multiplied by the pre-defined hyperparameter λ to obtain a result that is added to the batch loss as a regularizer. The model parameters are updated by computing backpropagation on the regularized loss, and the fine-tuned model is output.
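
    A minimal PyTorch-style sketch of that fine-tuning loop, assuming a classification model and a standard data loader; the Gaussian-KL stand-in for the divergence metric and the default values of λ, the learning rate, and the epoch count are illustrative assumptions, not details from the filing.

    import torch

    def gaussian_kl(mu_p, std_p, mu_q, std_q):
        # KL( N(mu_p, std_p) || N(mu_q, std_q) ); differentiable in mu_p and std_p.
        return (torch.log(std_q / std_p)
                + (std_p ** 2 + (mu_p - mu_q) ** 2) / (2.0 * std_q ** 2)
                - 0.5)

    def fine_tune(pretrained, private_loader, lam=0.1, epochs=1, lr=1e-4):
        model = pretrained
        criterion = torch.nn.CrossEntropyLoss(reduction="none")

        # Forward pass: query the pre-trained model with the private data and
        # record the initial loss distribution L_INIT (summarized by mean/std).
        model.eval()
        with torch.no_grad():
            init_losses = torch.cat([criterion(model(x), y)
                                     for x, y in private_loader])
        mu_init, std_init = init_losses.mean(), init_losses.std()

        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        model.train()
        for _ in range(epochs):
            for x, y in private_loader:
                per_example = criterion(model(x), y)   # minibatch losses (L_BATCH)
                batch_loss = per_example.mean()

                # Divergence between L_BATCH and L_INIT, scaled by lambda and
                # added to the batch loss as a regularizer.
                divergence = gaussian_kl(per_example.mean(),
                                         per_example.std() + 1e-8,
                                         mu_init, std_init + 1e-8)
                regularized = batch_loss + lam * divergence

                optimizer.zero_grad()
                regularized.backward()   # backpropagation on the regularized loss
                optimizer.step()
        return model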

    TRAINING MODELS FOR FEDERATED LEARNING
    Invention Publication

    Publication No.: US20240005215A1

    Publication Date: 2024-01-04

    Application No.: US17809826

    Filing Date: 2022-06-29

    IPC Class: G06N20/20

    CPC Class: G06N20/20

    Abstract: A method, system, and computer program product for training models for federated learning. The method determines, by a federated learning aggregator, a set of sample ratios for a set of participant systems, with each sample ratio associated with a distinct participant system. A set of participant epsilon values is generated for the set of participant systems, with each epsilon value associated with a participant system. A set of surrogate data sets is received for the set of participant systems, with each surrogate data set representing the data set of a participant system. The federated learning aggregator generates a set of local models, each generated based on a first global model. The method generates a second global model based on a prediction set generated by the set of participant systems using the set of local models.
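
    A structural sketch of one such aggregation round, assuming a generic model object exposing fit(data) and predict(data); the participant dictionary layout, the sample-ratio-based epsilon rule, and the pooled-prediction training step are illustrative assumptions rather than details from the filing.

    import copy

    def run_round(first_global_model, participants, base_eps=1.0):
        # participants: {name: {"size": int, "surrogate": list}} (hypothetical layout).
        # 1. Sample ratio per participant system (its share of the total data).
        total = sum(p["size"] for p in participants.values())
        ratios = {name: p["size"] / total for name, p in participants.items()}

        # 2. One epsilon value per participant; scaling a base budget by the
        #    sample ratio is only an illustrative rule.
        epsilons = {name: base_eps * r for name, r in ratios.items()}

        # 3. Surrogate data sets stand in for each participant's private data.
        surrogates = {name: p["surrogate"] for name, p in participants.items()}

        # 4. One local model per participant, each derived from the first
        #    global model and that participant's surrogate data.
        local_models = {}
        for name, data in surrogates.items():
            local = copy.deepcopy(first_global_model)
            local.fit(data)
            local_models[name] = local

        # 5. Participants produce prediction sets with the local models; the
        #    second global model is then trained on the pooled predictions.
        pooled = [row
                  for name, p in participants.items()
                  for row in local_models[name].predict(p["surrogate"])]
        second_global_model = copy.deepcopy(first_global_model)
        second_global_model.fit(pooled)
        return second_global_model, ratios, epsilons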

    TOKENIZED FEDERATED LEARNING
    Invention Application

    Publication No.: US20230017500A1

    Publication Date: 2023-01-19

    Application No.: US17373611

    Filing Date: 2021-07-12

    IPC Class: G06N20/00

    Abstract: One embodiment of the invention provides a method for federated learning (FL) comprising training a machine learning (ML) model collaboratively by initiating a round of FL across data parties. Each data party is allocated tokens to utilize during the training. The method further comprises maintaining, for each data party, a corresponding data usage profile indicative of an amount of data the data party consumed during the training and a corresponding participation profile indicative of an amount of data the data party provided during the training. The method further comprises selectively allocating new tokens to the data parties based on each participation profile maintained, selectively allocating additional new tokens to the data parties based on each data usage profile maintained, and reimbursing one or more tokens utilized during the training to the data parties based on one or more measurements of accuracy of the ML model.
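
    A toy token ledger illustrating the bookkeeping described above; the rate constants and the linear allocation rules are illustrative assumptions, since the abstract does not specify how token amounts are computed.

    from dataclasses import dataclass

    @dataclass
    class PartyLedger:
        tokens: float = 0.0        # tokens currently held by the data party
        tokens_spent: float = 0.0  # tokens utilized during this round of training
        data_provided: int = 0     # participation profile (data contributed)
        data_consumed: int = 0     # data usage profile (data consumed)

    def settle_round(ledgers, accuracy_gain,
                     participation_rate=0.01, usage_rate=0.005,
                     reimburse_frac=0.5):
        # Settle token balances for every data party after one FL round.
        for ledger in ledgers.values():
            # New tokens based on the participation profile.
            ledger.tokens += participation_rate * ledger.data_provided
            # Additional new tokens based on the data usage profile.
            ledger.tokens += usage_rate * ledger.data_consumed
            # Reimburse part of the tokens utilized during training, scaled by
            # the measured accuracy improvement of the ML model.
            ledger.tokens += reimburse_frac * accuracy_gain * ledger.tokens_spent
            ledger.tokens_spent = 0.0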

    MITIGATING ADVERSARIAL ATTACKS FOR SIMULTANEOUS PREDICTION AND OPTIMIZATION OF MODELS

    Publication No.: US20220414531A1

    Publication Date: 2022-12-29

    Application No.: US17358804

    Filing Date: 2021-06-25

    IPC Class: G06N20/00; G06N5/02

    Abstract: An approach for providing prediction and optimization of an adversarial machine-learning model is disclosed. The approach can comprise a training method for a defender that determines the optimal amount of adversarial training to prevent the task optimization model from making wrong decisions caused by an adversarial attack on the model input, within the simultaneous predict-and-optimize framework. Essentially, the approach trains a robust model via adversarial training. Based on the robustly trained model, the user can mitigate potential threats (adversarial noise in the task-based optimization model) given the inputs to the machine learning prediction.
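
    A sketch of adversarial training for the prediction stage of a predict-then-optimize pipeline, using a single-step FGSM perturbation as a stand-in attack; the perturbation budget eps and the mixing weight adv_weight play the role of the "amount of adversarial training" selected by the defender, and all defaults are assumptions rather than values from the filing.

    import torch

    def fgsm_perturb(model, x, y, loss_fn, eps):
        # Single-step adversarial perturbation of the model input (FGSM),
        # standing in for the attack the defender trains against.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        return (x_adv + eps * x_adv.grad.sign()).detach()

    def adversarial_train(model, loader, eps=0.05, adv_weight=0.5,
                          epochs=1, lr=1e-3):
        # Robust training of the prediction model whose outputs feed the
        # downstream task optimization.
        loss_fn = torch.nn.MSELoss()
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        model.train()
        for _ in range(epochs):
            for x, y in loader:
                x_adv = fgsm_perturb(model, x, y, loss_fn, eps)
                clean_loss = loss_fn(model(x), y)
                adv_loss = loss_fn(model(x_adv), y)
                loss = (1 - adv_weight) * clean_loss + adv_weight * adv_loss
                opt.zero_grad()   # clear gradients left over from the perturbation
                loss.backward()
                opt.step()
        return model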

    PARAMETER SHARING IN FEDERATED LEARNING

    Publication No.: US20210304062A1

    Publication Date: 2021-09-30

    Application No.: US16832809

    Filing Date: 2020-03-27

    IPC Class: G06N20/00

    Abstract: One embodiment provides a method for federated learning across a plurality of data parties, comprising assigning each data party a corresponding namespace in an object store, assigning a shared namespace in the object store, and triggering a round of federated learning by issuing a customized learning request to at least one data party. Each customized learning request issued to a data party triggers the data party to locally train a model based on training data owned by the data party and model parameters stored in the shared namespace, and to upload the local model resulting from the local training to the corresponding namespace in the object store assigned to that data party. The method further comprises retrieving, from the object store, local models uploaded to the object store during the round of federated learning, and aggregating the local models to obtain a shared model.
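
    A minimal in-memory sketch of the namespace layout and round flow, assuming parties are objects with a namespace attribute and a train(params) method returning a flat parameter list, and assuming the aggregation is a simple element-wise average; these interfaces are illustrative assumptions, not the patented implementation.

    class ObjectStore:
        # Minimal in-memory stand-in for the shared object store.
        def __init__(self):
            self.namespaces = {}

        def put(self, namespace, key, value):
            self.namespaces.setdefault(namespace, {})[key] = value

        def get(self, namespace, key):
            return self.namespaces[namespace][key]

    def run_round(store, parties, shared_ns="shared", param_key="model"):
        # Trigger the round: each party trains locally, starting from the
        # parameters currently stored in the shared namespace, and uploads the
        # resulting local model to its own assigned namespace.
        global_params = store.get(shared_ns, param_key)
        for party in parties:
            local_params = party.train(global_params)   # hypothetical party API
            store.put(party.namespace, param_key, local_params)

        # Retrieve the uploaded local models and aggregate them (element-wise
        # average over parameter vectors) into a shared model.
        uploads = [store.get(party.namespace, param_key) for party in parties]
        shared = [sum(values) / len(uploads) for values in zip(*uploads)]
        store.put(shared_ns, param_key, shared)
        return shared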

    MANAGING A CODE LOAD
    Invention Application (Granted)

    Publication No.: US20160210139A1

    Publication Date: 2016-07-21

    Application No.: US15079379

    Filing Date: 2016-03-24

    IPC Class: G06F9/445; G06F17/30

    Abstract: A system for managing a code load for a storage system is disclosed. The system can include instantiating a code load. The code load can include a first update for a first component and a second update for a second component. The system can include monitoring the operational state of the first and second components in response to instantiating the code load. The system can also include determining to perform the first update in response to a triggering event, and performing the first update in response to that determination.
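
    A small Python sketch of that flow, modeling components, updates, and triggering events with plain data structures; the names and the event-driven rule are illustrative assumptions only.

    from dataclasses import dataclass

    @dataclass
    class Component:
        name: str
        operational: bool = True
        firmware: str = "v1"

    def apply_code_load(components, updates, triggered):
        # updates: {component_name: new_firmware}; triggered: {component_name: bool}.
        applied = []
        for comp in components:
            # Monitor the operational state in response to instantiating the load.
            if not comp.operational:
                continue                      # skip components that are not ready
            # Perform an update only once its triggering event has been observed.
            if triggered.get(comp.name) and comp.name in updates:
                comp.firmware = updates[comp.name]
                applied.append(comp.name)
        return applied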


    REDUCING STORAGE FACILITY CODE LOAD SUSPEND RATE BY REDUNDANCY CHECK
    Invention Application (Granted)

    Publication No.: US20150331687A1

    Publication Date: 2015-11-19

    Application No.: US14281576

    Filing Date: 2014-05-19

    IPC Class: G06F9/445

    Abstract: Provided are techniques for code load processing. While performing code load processing of a set of modules of a same module type, it is determined that a first module in the set of modules is not in an operational state. It is determined that a second module is a redundant module for the first module. In response to determining that the second module is in an operational state and has already completed code update, the code load processing is continued. In response to determining that the second module is in an operational state and has not already completed code update, it is determined whether there is a third redundant module that is in an operational state. In response to determining that there is a third redundant module that is in an operational state, the code load processing is continued.
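
    A sketch of that decision procedure as a single predicate; the state and redundancy-group data structures are assumptions chosen for illustration, not the patented data model.

    def should_continue(module, redundancy_groups, state):
        # state[name] -> {"operational": bool, "updated": bool}
        # redundancy_groups[name] -> list of redundant peer module names
        if state[module]["operational"]:
            return True                        # nothing to check for this module

        peers = redundancy_groups.get(module, [])
        if not peers:
            return False                       # no redundant module: suspend

        second = peers[0]
        if state[second]["operational"] and state[second]["updated"]:
            return True                        # redundant module already updated

        if state[second]["operational"] and not state[second]["updated"]:
            # Look for a further redundant module that is operational.
            return any(state[third]["operational"] for third in peers[1:])

        return False                           # redundant module not operational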
