METHOD OF EVALUATING ROBUSTNESS OF ARTIFICIAL NEURAL NETWORK WATERMARKING AGAINST MODEL STEALING ATTACKS
Abstract:
Disclosed is a method of evaluating the robustness of artificial neural network watermarking against model stealing attacks. The method may include the steps of: training an artificial neural network model using training data together with additional information for watermarking; collecting new training data for training a copy model having the same structure as the trained artificial neural network model; training the copy model by inputting the collected new training data into the copy model; and evaluating the robustness of the watermarking of the trained artificial neural network model through the model stealing attack executed with the trained copy model.
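The steps above can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the patented implementation: a toy logistic-regression "victim" is trained on task data plus an oversampled out-of-distribution trigger set (the watermark), an attacker labels freshly collected data with the victim's outputs and trains a same-structure copy (the stealing attack), and robustness is read off as the fraction of trigger labels the copy retains.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, epochs=300, lr=0.5):
    # Plain gradient-descent logistic regression; stands in for any model family.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)

# Step 1: train the victim on task data plus a watermark trigger set.
X_task = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y_task = np.array([0] * 100 + [1] * 100)
X_trig = rng.normal(0, 0.1, (10, 2)) + np.array([3.0, -3.0])  # out-of-distribution
y_trig = np.ones(10, dtype=int)  # deliberately assigned label acts as the watermark
X_wm = np.vstack([X_task, np.repeat(X_trig, 5, axis=0)])      # oversample triggers
y_wm = np.concatenate([y_task, np.repeat(y_trig, 5)])
w_v, b_v = train_logreg(X_wm, y_wm)

# Step 2: the attacker collects new (unlabeled) data from the task distribution.
X_new = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])

# Step 3: model stealing — label the new data with the victim's outputs and
# train a copy model of the same structure on those stolen labels.
y_stolen = predict(w_v, b_v, X_new)
w_c, b_c = train_logreg(X_new, y_stolen)

# Step 4: evaluate robustness — how much of the watermark survives in the copy.
task_acc = (predict(w_c, b_c, X_task) == y_task).mean()
wm_retention = (predict(w_c, b_c, X_trig) == y_trig).mean()
print(f"copy task accuracy:          {task_acc:.2f}")
print(f"watermark retention in copy: {wm_retention:.2f}")
```

A high task accuracy with high trigger retention would indicate a robust watermark under this attack; high task accuracy with low retention would indicate the stealing attack strips the watermark while preserving utility.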