System and methods to mitigate poisoning attacks within machine learning systems
Abstract:
Embodiments of the present invention provide a system and methods to mitigate poisoning attacks within machine learning systems. The invention includes an improved data analysis approach that trains an ensemble of machine learning models to analyze received data and label it in a non-binary fashion, indicating the likelihood that certain data has been injected abnormally and should not be used for training purposes. The resulting dataset from the ensemble is assessed to determine convergence of model labeling and to detect outlier labeling among models in the ensemble. Confidence scoring of clustered interaction data may be performed across varied training data populations and with varying numbers of models. Output from the various training/model combinations is fed to a machine learning model that compares ensemble accuracy across the different model sets and selects the most accurate ensemble combination.
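The abstract does not specify concrete models or thresholds, but the core loop it describes, an ensemble producing non-binary poison-likelihood labels, a convergence check across members, and detection of outlier labelers, can be sketched as follows. This is a minimal illustration assuming scikit-learn anomaly detectors as the ensemble members; the normalization scheme, the 0.5 correlation cutoff, and the 0.8 exclusion threshold are all illustrative assumptions, not details from the patent.

```python
"""Sketch of ensemble poison scoring with convergence and
outlier-labeler checks. All thresholds are illustrative."""
import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(500, 4))    # legitimate interaction data
poison = rng.normal(6.0, 0.5, size=(25, 4))    # abnormally injected data
X = np.vstack([clean, poison])

# Ensemble of heterogeneous detectors, each yielding a raw score
# where higher means "more normal" under scikit-learn conventions.
detectors = [
    IsolationForest(random_state=0).fit(X),
    OneClassSVM(nu=0.05).fit(X),
    EllipticEnvelope(random_state=0).fit(X),
]

def poison_scores(model, X):
    """Map raw scores to [0, 1]; higher = more likely poisoned."""
    s = model.score_samples(X)
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)
    return 1.0 - s

scores = np.array([poison_scores(m, X) for m in detectors])  # (models, samples)

# Convergence check: per-sample consensus and disagreement.
consensus = scores.mean(axis=0)
disagreement = scores.std(axis=0)

# Outlier-labeler check: flag a member whose labeling barely
# correlates with the ensemble consensus (0.5 cutoff assumed).
for i, row in enumerate(scores):
    r = np.corrcoef(row, consensus)[0, 1]
    if r < 0.5:
        print(f"detector {i} is an outlier labeler (r={r:.2f})")

# Exclude high-consensus-score samples from downstream training.
keep = consensus < 0.8                          # assumed threshold
print(f"kept {keep.sum()} of {len(X)} samples for training")
```

The final selection step described in the abstract, feeding outputs of several training/model combinations to a machine learning model that picks the most accurate ensemble, would sit on top of this loop, e.g. by scoring each candidate ensemble on held-out labeled data and retaining the best combination.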