Generating trained neural networks with increased robustness against adversarial attacks

    Publication Number: US11829880B2

    Publication Date: 2023-11-28

    Application Number: US18049209

    Application Date: 2022-10-24

    Applicant: Adobe Inc.

    CPC classification number: G06N3/08 G06N20/00 H04L63/1441

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating trained neural networks with increased robustness against adversarial attacks by utilizing a dynamic dropout routine and/or a cyclic learning rate routine. For example, the disclosed systems can determine a dynamic dropout probability distribution associated with neurons of a neural network. The disclosed systems can further drop neurons from a neural network based on the dynamic dropout probability distribution to help neurons learn distinguishable features. In addition, the disclosed systems can utilize a cyclic learning rate routine to force the weights of a copy neural network away from the weights of an original neural network without decreasing prediction accuracy, ensuring that the decision boundaries learned by the two networks are different.
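
    A minimal sketch of the dynamic dropout idea described in the abstract, assuming a PyTorch setting: per-neuron dropout probabilities are derived from how redundant each neuron's activations are within a layer, so more redundant neurons are dropped more often and the layer is pushed toward distinguishable features. The function name dynamic_dropout, the cosine-similarity redundancy measure, and the base_rate parameter are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch of a dynamic (per-neuron) dropout mask.
import torch
import torch.nn.functional as F


def dynamic_dropout(activations: torch.Tensor, base_rate: float = 0.5) -> torch.Tensor:
    """Drop neurons with probabilities scaled by activation redundancy.

    activations: (batch, neurons) pre-dropout activations of one hidden layer.
    Returns the masked, rescaled activations (training mode only).
    """
    # Cosine similarity between neuron activation patterns across the batch.
    feats = F.normalize(activations.t(), dim=1)              # (neurons, batch)
    sim = feats @ feats.t()                                   # (neurons, neurons)
    # Mean absolute similarity to the other neurons (self-similarity removed).
    redundancy = (sim.abs().sum(dim=1) - 1.0) / max(sim.size(0) - 1, 1)

    # Turn redundancy scores into a per-neuron dropout probability distribution.
    drop_prob = (base_rate * redundancy / redundancy.mean().clamp_min(1e-6)).clamp(0.0, 0.9)

    # Sample a Bernoulli keep-mask and rescale, as in standard inverted dropout.
    keep = torch.bernoulli(1.0 - drop_prob).to(activations)
    return activations * keep / (1.0 - drop_prob).clamp_min(1e-6)
```

    In this sketch the dropout distribution is recomputed from the current batch, so which neurons are dropped most often changes as training progresses.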

    Generating trained neural networks with increased robustness against adversarial attacks

    Publication Number: US11481617B2

    Publication Date: 2022-10-25

    Application Number: US16253561

    Application Date: 2019-01-22

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating trained neural networks with increased robustness against adversarial attacks by utilizing a dynamic dropout routine and/or a cyclic learning rate routine. For example, the disclosed systems can determine a dynamic dropout probability distribution associated with neurons of a neural network. The disclosed systems can further drop neurons from a neural network based on the dynamic dropout probability distribution to help neurons learn distinguishable features. In addition, the disclosed systems can utilize a cyclic learning rate routine to force the weights of a copy neural network away from the weights of an original neural network without decreasing prediction accuracy, ensuring that the decision boundaries learned by the two networks are different.
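
    A minimal sketch of the cyclic learning rate routine mentioned in the abstract, assuming a PyTorch training loop: a copy of the trained network is fine-tuned with torch.optim.lr_scheduler.CyclicLR so that the high-learning-rate phases move its weights away from the original network's weights while the task loss keeps prediction accuracy high. The function train_copy_with_cyclic_lr, the optimizer settings, and the weight-drift measurement are assumptions for illustration, not the patent's specification.

```python
# Hypothetical sketch: fine-tune a copy of a trained model with a cyclic LR.
import copy
import torch


def train_copy_with_cyclic_lr(original, loader, epochs: int = 5):
    copy_net = copy.deepcopy(original)             # start from the trained weights
    optim = torch.optim.SGD(copy_net.parameters(), lr=1e-3, momentum=0.9)
    sched = torch.optim.lr_scheduler.CyclicLR(     # cyclic learning rate routine
        optim, base_lr=1e-4, max_lr=1e-1, step_size_up=200)
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(epochs):
        for inputs, labels in loader:
            optim.zero_grad()
            loss_fn(copy_net(inputs), labels).backward()
            optim.step()
            sched.step()                           # high-LR phases push weights away

    # Total parameter distance between copy and original, e.g. to verify drift.
    drift = sum((p - q).norm() for p, q in
                zip(copy_net.parameters(), original.parameters()))
    return copy_net, drift
```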

    GENERATING TRAINED NEURAL NETWORKS WITH INCREASED ROBUSTNESS AGAINST ADVERSARIAL ATTACKS

    Publication Number: US20200234110A1

    Publication Date: 2020-07-23

    Application Number: US16253561

    Application Date: 2019-01-22

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating trained neural networks with increased robustness against adversarial attacks by utilizing a dynamic dropout routine and/or a cyclic learning rate routine. For example, the disclosed systems can determine a dynamic dropout probability distribution associated with neurons of a neural network. The disclosed systems can further drop neurons from a neural network based on the dynamic dropout probability distribution to help neurons learn distinguishable features. In addition, the disclosed systems can utilize a cyclic learning rate routine to force the weights of a copy neural network away from the weights of an original neural network without decreasing prediction accuracy, ensuring that the decision boundaries learned by the two networks are different.
