-
Publication No.: US20200327443A1
Publication Date: 2020-10-15
Application No.: US16378942
Filing Date: 2019-04-09
Applicant: NXP B.V.
Inventor: Christine van Vredendaal, Nikita Veshchikov, Wilhelmus Petrus Adrianus Johannus Michiels
Abstract: A method for protecting a machine learning model is provided. In the method, a first machine learning model is trained, and a plurality of machine learning models derived from the first machine learning model is trained. Each of the plurality of machine learning models may be different from the first machine learning model. During inference operation, a first input sample is provided to the first machine learning model and to each of the plurality of machine learning models. The first machine learning model generates a first output, and the plurality of machine learning models generates a plurality of second outputs. The plurality of second outputs is aggregated to determine a final output. The final output and the first output are compared to determine whether the first input sample is an adversarial input. If the input is adversarial, a randomly generated output is provided instead of the first output.
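A minimal sketch of the inference-time check the abstract describes, assuming classifier objects that expose a `predict` method returning an integer class label; the function names and the majority-vote aggregation are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def detect_and_respond(first_model, derived_models, x, num_classes, rng=None):
    """Compare the first model's output against an aggregate of the derived
    models' outputs; on disagreement, treat x as adversarial and return a
    randomly generated output instead of the true one (per the abstract).
    Majority voting is one plausible aggregation, assumed here."""
    rng = rng if rng is not None else np.random.default_rng()
    first_output = first_model.predict(x)
    second_outputs = [m.predict(x) for m in derived_models]
    # Aggregate the plurality of second outputs, here by majority vote.
    final_output = int(np.bincount(second_outputs, minlength=num_classes).argmax())
    if final_output != first_output:
        # Disagreement classifies x as an adversarial input: hide the real answer.
        return int(rng.integers(num_classes))
    return first_output
```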
-
Publication No.: US20210019661A1
Publication Date: 2021-01-21
Application No.: US16511082
Filing Date: 2019-07-15
Applicant: NXP B.V.
Inventor: Joppe Willem Bos, Simon Johann Friedberger, Nikita Veshchikov, Christine van Vredendaal
Abstract: A method is provided for detecting copying of a machine learning model. In the method, a first machine learning model is divided into a plurality of portions. Intermediate outputs from a hidden layer of a selected one of the plurality of portions are compared to corresponding outputs from a second machine learning model to detect the copying. Alternatively, a first seal may be generated using a plurality of inputs and the intermediate outputs from nodes of the selected portion. A second seal, generated in the same way from a suspected copy, is compared to the first seal to detect the copying. If the first and second seals are the same, there is a high likelihood that the suspected copy is an actual copy. Using the method, only the intermediate outputs of the machine learning model have to be disclosed to others, thus protecting the confidentiality of the model.
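A minimal sketch of the seal-based variant, assuming a seal is a hash over the input samples and the hidden-layer outputs of the selected portion; the SHA-256 construction and the rounding step are assumptions, since the abstract only requires that both seals be generated the same way:

```python
import hashlib
import numpy as np

def make_seal(inputs, hidden_outputs, decimals=4):
    """Hash the input samples together with the intermediate (hidden-layer)
    outputs of the selected model portion. Rounding reduces floating-point
    noise so identical computations produce identical seals."""
    h = hashlib.sha256()
    for x, y in zip(inputs, hidden_outputs):
        h.update(np.round(np.asarray(x, dtype=np.float64), decimals).tobytes())
        h.update(np.round(np.asarray(y, dtype=np.float64), decimals).tobytes())
    return h.hexdigest()

# Copying is suspected when the two seals match:
# copied = make_seal(inputs, original_hidden) == make_seal(inputs, suspect_hidden)
```

Note that only the hidden-layer outputs (or the seal derived from them) need to be exchanged, which is how the method keeps the rest of the model confidential.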
-