-
Publication No.: EP3906508A1
Publication Date: 2021-11-10
Application No.: EP19907690.2
Application Date: 2019-04-23
Applicant: Intel Corporation
-
Publication No.: EP4369229A2
Publication Date: 2024-05-15
Application No.: EP24163021.9
Application Date: 2019-04-23
Applicant: INTEL Corporation
Inventors: POGORELIK, Oleg , NAYSHTUT, Alex , BEN-SHALOM, Omer , KLIMOV, Denis , KELLERMANN, Raizy , BARNHART-MAGEN, Guy , SUKHOMLINOV, Vadim
IPC Classification: G06F21/55
CPC Classification: G06N3/04 , G06N20/00 , G06F21/554 , G06N7/01
Abstract: Techniques and apparatuses to harden AI systems against various attacks are provided. Among them are techniques and apparatuses that expand the domain of an inference model to include hidden classes as well as visible classes. The hidden classes can be used to detect possible probing attacks against the model.
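A minimal sketch of the hidden-class idea described in the abstract, assuming a PyTorch-style classifier; the class counts, the confidence threshold, and the single-layer model are illustrative assumptions, not details from the patent. The output space covers both visible and hidden classes, and a prediction that falls mostly into the hidden classes is flagged as a possible probing attempt.

# Hypothetical sketch: hidden classes appended to the visible output space
# to flag possible probing attacks. Shapes and threshold are assumptions.
import torch
import torch.nn as nn

NUM_VISIBLE = 10        # classes exposed to callers (assumed)
NUM_HIDDEN = 3          # extra classes reserved for probe detection (assumed)
PROBE_THRESHOLD = 0.5   # assumed confidence threshold for flagging

class HiddenClassClassifier(nn.Module):
    def __init__(self, in_features: int):
        super().__init__()
        # A single linear layer stands in for any inference model.
        self.head = nn.Linear(in_features, NUM_VISIBLE + NUM_HIDDEN)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(x)

def classify_with_probe_check(model: nn.Module, x: torch.Tensor):
    """Return (visible class or None, probe_suspected)."""
    probs = torch.softmax(model(x), dim=-1)
    # Total probability mass assigned to the hidden classes.
    hidden_mass = probs[..., NUM_VISIBLE:].sum(dim=-1)
    if hidden_mass.item() > PROBE_THRESHOLD:
        # Input resembles a hidden class: report a possible probing attack.
        return None, True
    return int(probs[..., :NUM_VISIBLE].argmax(dim=-1).item()), False

if __name__ == "__main__":
    model = HiddenClassClassifier(in_features=32)
    label, suspected = classify_with_probe_check(model, torch.randn(1, 32))
    print(f"visible class: {label}, probe suspected: {suspected}")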
-
Publication No.: EP4202786A1
Publication Date: 2023-06-28
Application No.: EP22201179.3
Application Date: 2022-10-12
Applicant: INTEL Corporation
Inventors: LEVY, Dor , BEN-SHALOM, Omer , KELLERMANN, Raizy , NAYSHTUT, Alex
Abstract: Adversarial sample protection for machine learning is described. An example of a storage medium includes instructions for initiating processing of examples for training of an inference engine in a system; dynamically selecting a subset of defensive preprocessing methods from a repository of defensive preprocessing methods for a current iteration of processing, wherein a subset of defensive preprocessing methods is selected for each iteration of processing; performing training of the inference engine with a plurality of examples, wherein the training of the inference engine includes operation of the selected subset of defensive preprocessing methods; and performing an inference operation with the inference engine, including utilizing the selected subset of preprocessing defenses for the current iteration of processing.
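A minimal sketch of the per-iteration defense selection described in the abstract: a repository of defensive preprocessing methods from which a subset is drawn dynamically for each iteration and applied to the examples before they reach the inference engine. The specific defenses (quantization, noise, clipping), the random selection policy, and the subset size are illustrative assumptions rather than details from the patent.

# Hypothetical sketch: dynamically select a subset of defensive preprocessing
# methods from a repository for each training iteration. Defenses, subset
# size, and selection policy are assumptions.
import random
import numpy as np

def quantize(x: np.ndarray) -> np.ndarray:
    # Reduce effective bit depth to blunt small adversarial perturbations.
    return np.round(x * 16) / 16

def add_noise(x: np.ndarray) -> np.ndarray:
    # Add Gaussian noise, in the spirit of randomized smoothing.
    return x + np.random.normal(0.0, 0.05, size=x.shape)

def clip_range(x: np.ndarray) -> np.ndarray:
    # Keep inputs in the expected [0, 1] range.
    return np.clip(x, 0.0, 1.0)

DEFENSE_REPOSITORY = [quantize, add_noise, clip_range]

def select_defenses(k: int = 2):
    # Dynamically choose a subset of defenses for the current iteration.
    return random.sample(DEFENSE_REPOSITORY, k)

def preprocess(batch: np.ndarray, defenses) -> np.ndarray:
    # Apply the selected defenses in sequence before training or inference.
    for defense in defenses:
        batch = defense(batch)
    return batch

if __name__ == "__main__":
    for iteration in range(3):
        defenses = select_defenses()
        batch = np.random.rand(4, 8)   # stand-in for a batch of examples
        defended = preprocess(batch, defenses)
        names = [d.__name__ for d in defenses]
        print(f"iteration {iteration}: applied {names}, shape {defended.shape}")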
-
Publication No.: EP4369229A3
Publication Date: 2024-09-25
Application No.: EP24163021.9
Application Date: 2019-04-23
Applicant: INTEL Corporation
Inventors: POGORELIK, Oleg , NAYSHTUT, Alex , BEN-SHALOM, Omer , KLIMOV, Denis , KELLERMANN, Raizy , BARNHART-MAGEN, Guy , SUKHOMLINOV, Vadim
Abstract: Techniques and apparatuses to harden AI systems against various attacks are provided. Among them are techniques and apparatuses that expand the domain of an inference model to include hidden classes as well as visible classes. The hidden classes can be used to detect possible probing attacks against the model.
-