Error tolerant neural network model compression

    Publication Number: US10229356B1

    Publication Date: 2019-03-12

    Application Number: US14581969

    Filing Date: 2014-12-23

    Abstract: Features are disclosed for error tolerant model compression. Such features could be used to reduce the size of a deep neural network model including several hidden node layers. Reducing the size in an error tolerant fashion ensures that predictive applications relying on the model do not experience performance degradation due to model compression. Such predictive applications include automatic speech recognition, image recognition, and recommendation engines. Partially quantized models are re-trained such that any degradation of accuracy is “trained out” of the model, providing improved error tolerance along with compression.
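
    As a rough illustration of the quantize-then-retrain idea described in the abstract, the sketch below uniformly quantizes the hidden-layer weights of a small network and then fine-tunes it on the original task so the quantization error is absorbed. This is not the patented method; the PyTorch framework, layer sizes, 4-bit width, and dummy data are all assumptions made for the example.

```python
# Minimal sketch of error-tolerant compression: partially quantize a network,
# then re-train so the accuracy degradation is "trained out". Hypothetical setup.
import torch
import torch.nn as nn


def quantize_weights(weight: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Uniformly quantize a weight tensor to 2**num_bits levels."""
    w_min, w_max = weight.min(), weight.max()
    scale = (w_max - w_min) / (2 ** num_bits - 1)
    return torch.round((weight - w_min) / scale) * scale + w_min


model = nn.Sequential(
    nn.Linear(40, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Partially quantize: compress only the hidden layers, leave the output layer alone.
with torch.no_grad():
    for layer in list(model)[:-1]:
        if isinstance(layer, nn.Linear):
            layer.weight.copy_(quantize_weights(layer.weight, num_bits=4))

# Re-train (fine-tune) on the original task so the quantization error is absorbed.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(64, 40)            # dummy input features
y = torch.randint(0, 10, (64,))    # dummy class labels
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```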

    Robust neural network acoustic model with side task prediction of reference signals

    Publication Number: US10147442B1

    Publication Date: 2018-12-04

    Application Number: US14869803

    Filing Date: 2015-09-29

    Abstract: A neural network acoustic model is trained to be robust and produce accurate output when used to process speech signals having acoustic interference. The neural network acoustic model can be trained using a source-separation process by which, in addition to producing the main acoustic model output for a given input, the neural network generates predictions of the separate speech and interference portions of the input. The parameters of the neural network can be adjusted to jointly optimize all three outputs (e.g., the main acoustic model output, the speech signal prediction, and the interference signal prediction), rather than only optimizing the main acoustic model output. Once trained, output layers for the speech and interference signal predictions can be removed from the neural network or otherwise disabled.
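
    The abstract describes a multi-task arrangement: a shared network with a main acoustic-model output plus two side heads that predict the clean-speech and interference portions of the noisy input, all optimized jointly, with the side heads removed after training. The sketch below illustrates that general arrangement only; the PyTorch framework, layer sizes, senone count, and dummy batch are assumptions, not details from the patent.

```python
# Minimal sketch of an acoustic model with side-task prediction of the
# clean-speech and interference signals. Hypothetical dimensions and data.
import torch
import torch.nn as nn


class RobustAcousticModel(nn.Module):
    def __init__(self, feat_dim=40, hidden=256, num_senones=3000):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.acoustic_head = nn.Linear(hidden, num_senones)  # main output
        self.speech_head = nn.Linear(hidden, feat_dim)       # side task: clean speech
        self.noise_head = nn.Linear(hidden, feat_dim)        # side task: interference

    def forward(self, x):
        h = self.shared(x)
        return self.acoustic_head(h), self.speech_head(h), self.noise_head(h)


model = RobustAcousticModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

# Dummy batch: noisy features = clean speech + interference, plus senone targets.
clean = torch.randn(32, 40)
noise = 0.5 * torch.randn(32, 40)
noisy = clean + noise
targets = torch.randint(0, 3000, (32,))

# Jointly optimize all three outputs, not just the main acoustic-model output.
optimizer.zero_grad()
logits, speech_pred, noise_pred = model(noisy)
loss = ce(logits, targets) + mse(speech_pred, clean) + mse(noise_pred, noise)
loss.backward()
optimizer.step()

# After training, the side heads are unused; only the acoustic head is kept.
with torch.no_grad():
    senone_logits, _, _ = model(noisy)
```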
