PERFORMANCE OF NEURAL NETWORKS USING LEARNED SPECIALIZED TRANSFORMATION FUNCTIONS

    Publication No.: US20210042625A1

    Publication Date: 2021-02-11

    Application No.: US16534856

    Application Date: 2019-08-07

    Applicant: ADOBE INC.

    Abstract: Methods and systems are provided for creating and using a transformation function system capable of providing network-agnostic performance improvement. The transformation function system receives a representation from a task neural network. The representation can be input into a composite function neural network of the transformation function system, which generates a learned composite function constructed specifically for the task neural network based on the input representation. The learned composite function can be applied to a feature embedding of the task neural network to transform the feature embedding, and transforming the feature embedding can optimize the output of the task neural network.
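    A minimal sketch of the idea described in this abstract, under stated assumptions: a small composite-function network consumes a representation from the task network and emits a transformation that is then applied to the task network's feature embedding. The FiLM-style scale-and-shift parameterization, the class name, and the dimensions are illustrative assumptions, not the patent's specific construction.

```python
import torch
import torch.nn as nn

class CompositeFunctionNetwork(nn.Module):
    """Hypothetical module: maps a task-network representation to a
    per-feature transformation applied to the task network's embedding."""
    def __init__(self, rep_dim: int, embed_dim: int):
        super().__init__()
        # Predicts a scale and a shift for each embedding dimension.
        self.param_head = nn.Linear(rep_dim, 2 * embed_dim)

    def forward(self, representation: torch.Tensor, embedding: torch.Tensor) -> torch.Tensor:
        scale, shift = self.param_head(representation).chunk(2, dim=-1)
        # Apply the learned transformation to the feature embedding.
        return embedding * (1 + scale) + shift

# Usage: transform a task network's embedding without modifying the task network itself.
rep = torch.randn(8, 64)        # representation taken from the task network (assumed shape)
emb = torch.randn(8, 128)       # feature embedding of the task network (assumed shape)
transform = CompositeFunctionNetwork(rep_dim=64, embed_dim=128)
new_emb = transform(rep, emb)   # transformed embedding, fed back into the task network
```

    Because the transformation is generated from the task network's own representation, the same composite-function machinery could in principle be attached to different task networks, which is one way to read the "network-agnostic" claim above.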

    SYSTEMS AND METHODS OF TRAINING NEURAL NETWORKS AGAINST ADVERSARIAL ATTACKS

    Publication No.: US20220292356A1

    Publication Date: 2022-09-15

    Application No.: US17805405

    Application Date: 2022-06-03

    Applicant: ADOBE INC.

    Abstract: Embodiments disclosed herein describe systems, methods, and products that generate trained neural networks that are robust against adversarial attacks. During a training phase, an illustrative computer may iteratively optimize a loss function that may include a penalty for ill-conditioned weight matrices in addition to a penalty for classification errors. After the training phase, the trained neural network may therefore include one or more well-conditioned weight matrices, which may minimize the effect of perturbations within an adversarial input, thereby increasing classification accuracy on the adversarial input. By contrast, conventional training approaches may merely reduce classification errors using backpropagation; as a result, any perturbation in an input is prone to generate a large effect on the output.
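    A minimal sketch of the training idea in this abstract, under stated assumptions: the abstract calls for a loss that penalizes ill-conditioned weight matrices alongside classification error. One plausible surrogate, used here for illustration only, is the condition number (largest over smallest singular value) of each 2-D weight matrix computed with torch.linalg.svdvals; the penalty form, the coefficient lam, and the helper names are assumptions, not the patent's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conditioning_penalty(model: nn.Module) -> torch.Tensor:
    """Sum of condition numbers (sigma_max / sigma_min) over all 2-D weight matrices."""
    total = torch.zeros(())
    for name, param in model.named_parameters():
        if param.ndim == 2 and "weight" in name:
            s = torch.linalg.svdvals(param)          # singular values, descending order
            total = total + s[0] / (s[-1] + 1e-8)    # condition number, numerically regularized
    return total

def robust_training_step(model, optimizer, x, y, lam=1e-3):
    """One step minimizing classification error plus the conditioning penalty."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + lam * conditioning_penalty(model)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage on a toy classifier.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
robust_training_step(model, opt, x, y)
```

    The intuition behind this kind of penalty is that a well-conditioned weight matrix cannot amplify a small input perturbation much more in one direction than another, which limits how far an adversarial perturbation can move the network's output.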
