Safety monitor for image misclassification
Abstract:
Systems, apparatuses, and methods for implementing a safety monitor framework for a safety-critical inference application are disclosed. A system includes a safety-critical inference application, a safety monitor, and an inference accelerator engine. The safety monitor receives an input image, test data, and a neural network specification from the safety-critical inference application. The safety monitor generates a modified image by adding additional objects outside of the input image. The safety monitor provides the modified image and the neural network specification to the inference accelerator engine, which processes the modified image and returns its outputs to the safety monitor. The safety monitor determines the likelihood that the original input image was processed erroneously by comparing the outputs for the additional objects against known good results. The safety monitor complements the overall fault coverage of the inference accelerator engine and covers faults that are observable only at the network level.
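The sketch below is an illustrative reading of that mechanism, not code from the patent: the test objects are composited into a padded border around the untouched input image, the whole frame is run through the accelerator, and the outputs for the test-object regions are compared against their known good labels. All names here (SafetyMonitor, TestObject, the infer callable standing in for the inference accelerator engine) and the padding and comparison details are assumptions made for illustration.

```python
# Minimal sketch of the safety-monitor idea, assuming a per-region classifier interface.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

import numpy as np


@dataclass
class TestObject:
    """A known test pattern placed outside the original image, with its expected label."""
    patch: np.ndarray          # small HxWxC patch with known content
    position: Tuple[int, int]  # (row, col) of the patch in the padded frame
    expected_label: int        # known good classification result for this patch


class SafetyMonitor:
    def __init__(self, infer: Callable[[np.ndarray], Dict[Tuple[int, int], int]]):
        # `infer` stands in for the inference accelerator engine: it takes the
        # modified image and returns a label for each region of interest.
        self.infer = infer

    def build_modified_image(self, image: np.ndarray,
                             test_objects: List[TestObject],
                             border: int) -> np.ndarray:
        """Pad the input image and draw the test objects into the padded border."""
        h, w, c = image.shape
        frame = np.zeros((h + 2 * border, w + 2 * border, c), dtype=image.dtype)
        frame[border:border + h, border:border + w] = image  # original image untouched
        for obj in test_objects:
            r, c0 = obj.position
            ph, pw, _ = obj.patch.shape
            frame[r:r + ph, c0:c0 + pw] = obj.patch
        return frame

    def check(self, image: np.ndarray, test_objects: List[TestObject],
              border: int = 32) -> bool:
        """Run the modified image through the accelerator and compare its outputs
        for the test objects against known good results. A mismatch suggests the
        original image may also have been processed erroneously."""
        modified = self.build_modified_image(image, test_objects, border)
        outputs = self.infer(modified)
        return all(outputs.get(obj.position) == obj.expected_label
                   for obj in test_objects)
```

Because the test objects pass through the same network and hardware path as the real input, a wrong result for a known object flags faults that only manifest at the network level, which is the coverage gap the abstract says this monitor addresses.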