ADVERSARIAL INTERPOLATION BACKDOOR DETECTION

    Publication Number: US20220114259A1

    Publication Date: 2022-04-14

    Application Number: US17068853

    Application Date: 2020-10-13

    IPC Classification: G06F21/56 G06N20/00 G06N5/04

    Abstract: One or more computer processors determine a tolerance value and a norm value associated with an untrusted model and an adversarial training method. The one or more computer processors generate a plurality of interpolated adversarial images ranging between a pair of images utilizing the adversarial training method, wherein each image in the pair of images is from a different class. The one or more computer processors detect a backdoor associated with the untrusted model utilizing the generated plurality of interpolated adversarial images. The one or more computer processors harden the untrusted model by training the untrusted model with the generated plurality of interpolated adversarial images.
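The interpolate-then-detect idea in the abstract can be sketched as follows. This is a minimal illustration, not the patented method: random L-inf norm-bounded noise stands in for a full adversarial training method (such as PGD), and the `predict` interface, `pair_classes` set, and threshold rule are hypothetical names introduced here.

```python
import numpy as np

def interpolate_adversarial(image_a, image_b, steps, epsilon, rng):
    """Linearly interpolate between two images of different classes and
    add an L-inf norm-bounded perturbation at each step (a stand-in for
    a real adversarial-example search such as PGD)."""
    images = []
    for i in range(steps):
        alpha = i / (steps - 1)
        base = (1 - alpha) * image_a + alpha * image_b  # interpolation point
        noise = rng.uniform(-epsilon, epsilon, size=base.shape)  # bounded perturbation
        images.append(np.clip(base + noise, 0.0, 1.0))
    return images

def detect_backdoor(predict, images, pair_classes, tolerance):
    """Flag a backdoor when the fraction of interpolated images the model
    maps outside the two endpoint classes exceeds the tolerance value."""
    off_pair = sum(1 for img in images if predict(img) not in pair_classes)
    return off_pair / len(images) > tolerance
```

A clean model should keep the interpolated images inside the two endpoint classes; a trojaned model tends to route perturbed inputs toward its target class, pushing the off-pair fraction past the tolerance.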

    TOKENIZED FEDERATED LEARNING
    Invention Application

    Publication Number: US20230017500A1

    Publication Date: 2023-01-19

    Application Number: US17373611

    Application Date: 2021-07-12

    IPC Classification: G06N20/00

    Abstract: One embodiment of the invention provides a method for federated learning (FL) comprising training a machine learning (ML) model collaboratively by initiating a round of FL across data parties. Each data party is allocated tokens to utilize during the training. The method further comprises maintaining, for each data party, a corresponding data usage profile indicative of an amount of data the data party consumed during the training and a corresponding participation profile indicative of an amount of data the data party provided during the training. The method further comprises selectively allocating new tokens to the data parties based on each participation profile maintained, selectively allocating additional new tokens to the data parties based on each data usage profile maintained, and reimbursing one or more tokens utilized during the training to the data parties based on one or more measurements of accuracy of the ML model.
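The per-party accounting described above can be sketched as a simple settlement step. This is an illustrative sketch only: the `DataParty` record, the linear participation and usage rates, and the equal-split accuracy reimbursement rule are assumptions introduced here, not the patented allocation scheme.

```python
from dataclasses import dataclass

@dataclass
class DataParty:
    tokens: int = 0
    data_used: int = 0      # data usage profile: data consumed during training
    data_provided: int = 0  # participation profile: data provided during training

def settle_round(parties, participation_rate, usage_rate, accuracy, pool):
    """Allocate new tokens after one round of federated learning:
    reward participation, reward data usage, then reimburse a share of
    the pool scaled by the measured accuracy of the ML model."""
    for party in parties.values():
        party.tokens += int(party.data_provided * participation_rate)
        party.tokens += int(party.data_used * usage_rate)
        party.tokens += int(pool * accuracy / len(parties))
```

Keeping the two profiles separate lets the operator tune incentives independently: a party that contributes data but consumes little is rewarded through the participation rate, while heavy consumers are charged (or rewarded) through the usage rate.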

    MITIGATING ADVERSARIAL ATTACKS FOR SIMULTANEOUS PREDICTION AND OPTIMIZATION OF MODELS

    Publication Number: US20220414531A1

    Publication Date: 2022-12-29

    Application Number: US17358804

    Application Date: 2021-06-25

    IPC Classification: G06N20/00 G06N5/02

    Abstract: An approach for providing prediction and optimization of an adversarial machine-learning model is disclosed. The approach comprises a training method for a defender that determines the optimal amount of adversarial training to prevent the task optimization model from making incorrect decisions caused by an adversarial attack on the model's input, within the simultaneous predict-and-optimize framework. Essentially, the approach trains a robust model via adversarial training. Based on the robust model, the user can mitigate potential threats (adversarial noise in the task-based optimization model) given the inputs to the machine learning prediction.
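The predict-and-optimize defense described above can be illustrated with a toy sketch. This is not the patented framework: the linear model, squared-error loss, FGSM-style sign perturbation, and the thresholding `decide` step standing in for the downstream task optimization are all assumptions introduced here.

```python
import numpy as np

def adversarial_training_step(w, x, y, epsilon, lr):
    """One adversarial training step for a linear predictor y ~ w.x:
    perturb the input in the loss-gradient sign direction within an
    L-inf ball of radius epsilon, then update w on the perturbed input."""
    pred = w @ x
    grad_x = 2 * (pred - y) * w              # d(squared loss)/dx
    x_adv = x + epsilon * np.sign(grad_x)    # worst-case bounded perturbation
    grad_w = 2 * (w @ x_adv - y) * x_adv     # d(squared loss)/dw at x_adv
    return w - lr * grad_w

def decide(w, x):
    """Toy downstream task optimization: threshold the model's
    prediction (stand-in for the predict-and-optimize stage)."""
    return 1 if w @ x > 0 else 0
```

Because the weights are updated on the worst-case perturbed input rather than the clean one, the downstream decision stays stable under bounded adversarial noise at the input.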