System and methods to prevent poisoning attacks in machine learning systems in real time
Abstract:
Embodiments of the present invention provide a system and methods to prevent poisoning attacks in machine learning systems in real time. The invention prevents the injection of abnormal data into the training data sets used to train machine learning models that identify malfeasant activity by, in real time, blocking certain data from entering the training dataset, blocking certain interactions from being completed, or placing holds on certain resources or users according to patterns detected by an ensemble of machine learning models. Various thresholds, which may be set manually or identified by the machine learning algorithm, determine which interactions or users should be blocked.
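The abstract does not disclose an implementation, but the described gating logic can be sketched as follows: an ensemble of models scores each incoming interaction, and configurable thresholds decide whether the interaction is admitted to the training set, blocked outright, or placed on hold. All names, thresholds, and the mean-score aggregation below are illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch of threshold-gated ensemble scoring, assuming each model
# maps an interaction record to an anomaly score in [0, 1]. Not the patented
# implementation; names and thresholds are placeholders for illustration.
from dataclasses import dataclass
from statistics import mean
from typing import Callable, List


@dataclass
class Decision:
    action: str   # "admit", "block", or "hold"
    score: float  # aggregated anomaly score from the ensemble


class EnsembleGate:
    def __init__(self, models: List[Callable[[dict], float]],
                 block_threshold: float = 0.9,
                 hold_threshold: float = 0.7):
        self.models = models
        self.block_threshold = block_threshold
        self.hold_threshold = hold_threshold

    def evaluate(self, interaction: dict) -> Decision:
        # Aggregate the ensemble's scores (simple mean, chosen for illustration).
        score = mean(m(interaction) for m in self.models)
        if score >= self.block_threshold:
            return Decision("block", score)  # keep out of training data, reject interaction
        if score >= self.hold_threshold:
            return Decision("hold", score)   # flag the resource or user for review
        return Decision("admit", score)      # safe to add to the training set


# Usage with two toy scoring functions standing in for trained models.
gate = EnsembleGate(models=[
    lambda x: 1.0 if x.get("amount", 0) > 10_000 else 0.1,
    lambda x: 0.8 if x.get("new_account") else 0.2,
])

training_set = []
for interaction in [{"amount": 50, "new_account": False},
                    {"amount": 25_000, "new_account": True}]:
    decision = gate.evaluate(interaction)
    if decision.action == "admit":
        training_set.append(interaction)  # only data below the thresholds reaches training
```

In this sketch the thresholds correspond to the manually set or learned thresholds mentioned in the abstract; a deployed system could instead tune them from the ensemble's score distribution.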