Model interpretability using proxy features
Abstract:
In one embodiment, a service identifies a set of attributes associated with a first machine learning model trained to make an inference about a computer network. The service obtains a label for each attribute in the set, each label indicating whether its corresponding attribute is a probable cause of the inference. The service maps input features of the first machine learning model to those attributes in the set that were labeled as probable causes of the inference. The service generates a second machine learning model in part by using the mapped attributes to form a set of input features for the second machine learning model, whereby the input features of the first machine learning model and the input features of the second machine learning model differ.
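The workflow described above resembles a surrogate-model approach to interpretability: a second, simpler model is trained over human-meaningful proxy features derived from the first model's inputs. Below is a minimal sketch of how such a pipeline might look. All names (the telemetry feature list, the attribute labels, the `to_proxy_features` mapping) are illustrative assumptions for this sketch, not details taken from the patent.

```python
# Hypothetical sketch of the proxy-feature workflow from the abstract.
# Assumes scikit-learn; the feature names and mapping are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# First model: trained on low-level network telemetry to infer, e.g.,
# whether a link is degraded.
raw_features = ["rx_bytes", "tx_bytes", "retransmits", "jitter_ms"]
X_raw = rng.normal(size=(500, len(raw_features)))
y = (X_raw[:, 2] + 0.5 * X_raw[:, 3] > 0.5).astype(int)  # synthetic labels
first_model = RandomForestClassifier(n_estimators=50, random_state=0)
first_model.fit(X_raw, y)

# Steps 1-2: attributes associated with the inference, each labeled as a
# probable cause (True) or not (False), e.g., by a domain expert.
attribute_labels = {
    "congestion": True,
    "packet_loss": True,
    "time_of_day": False,
}
probable_causes = [a for a, cause in attribute_labels.items() if cause]

# Step 3: map the first model's raw input features onto the
# probable-cause attributes (here, a simple hand-written aggregation).
def to_proxy_features(X):
    congestion = X[:, 0] + X[:, 1]   # rx_bytes + tx_bytes
    packet_loss = X[:, 2]            # retransmits
    return np.column_stack([congestion, packet_loss])

X_proxy = to_proxy_features(X_raw)

# Step 4: a second, more interpretable model trained over the proxy
# features to mimic the first model; note its input features differ
# from the first model's.
second_model = DecisionTreeClassifier(max_depth=3, random_state=0)
second_model.fit(X_proxy, first_model.predict(X_raw))

print(dict(zip(probable_causes, second_model.feature_importances_)))
```

In this sketch the second model is fit to the first model's predictions rather than to ground truth, which is one common surrogate-modeling choice: the goal is to explain the first model's inferences in terms of the probable-cause attributes, not to retrain the original task.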