Systems and methods of training neural networks against adversarial attacks
Abstract:
Embodiments disclosed herein describe systems, methods, and products that generate trained neural networks that are robust against adversarial attacks. During a training phase, an illustrative computer may iteratively optimize a loss function that may include a penalty for ill-conditioned weight matrices in addition to a penalty for classification errors. Therefore, after the training phase, the trained neural network may include one or more well-conditioned weight matrices. The one or more well-conditioned weight matrices may minimize the effect of perturbations within an adversarial input, thereby increasing the accuracy of classification of the adversarial input. By contrast, conventional training approaches may merely reduce classification errors using backpropagation, and, as a result, any perturbation in an input is prone to produce a large effect on the output.
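A minimal sketch of how such a training objective might be assembled, assuming a PyTorch model whose trainable layers are nn.Linear; the conditioning penalty here uses the ratio of largest to smallest singular value of each weight matrix, and the weighting factor lam and the eps stabilizer are illustrative assumptions rather than values specified by the disclosure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conditioning_penalty(model: nn.Module, eps: float = 1e-8) -> torch.Tensor:
    """Sum of condition numbers (sigma_max / sigma_min) over all Linear weight matrices.

    A large value indicates an ill-conditioned matrix, which the loss below discourages.
    """
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for module in model.modules():
        if isinstance(module, nn.Linear):
            # Singular values are returned in descending order.
            s = torch.linalg.svdvals(module.weight)
            penalty = penalty + s[0] / (s[-1] + eps)
    return penalty

def total_loss(model: nn.Module,
               logits: torch.Tensor,
               targets: torch.Tensor,
               lam: float = 1e-3) -> torch.Tensor:
    """Classification error plus a penalty for ill-conditioned weight matrices."""
    return F.cross_entropy(logits, targets) + lam * conditioning_penalty(model)
```

In a standard training loop, total_loss would replace the plain cross-entropy term, so that backpropagation jointly reduces classification errors and drives each weight matrix toward a smaller condition number.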