Publication No.: US20190197407A1
Publication Date: 2019-06-27
Application No.: US16328182
Filing Date: 2016-09-26
Applicant: INTEL CORPORATION
Inventor: Anbang YAO , Yiwen GUO , Lin XU , Yan LIN , Yurong CHEN
CPC classification number: G06N3/082 , G06F17/16 , G06N3/02 , G06N3/04 , G06N3/0445 , G06N3/0454 , G06N3/084
Abstract: An apparatus and method are described for reducing the parameter density of a deep neural network (DNN). A layer-wise pruning module prunes a specified set of parameters from each layer of a reference dense neural network model to generate a second neural network model having a higher sparsity rate than the reference model; a retraining module retrains the second neural network model on a set of training data to generate a retrained second neural network model; and the retraining module outputs the retrained second neural network model as a final neural network model if a target sparsity rate has been reached, or returns it to the layer-wise pruning module for additional pruning if the target sparsity rate has not been reached.
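The iterative prune-and-retrain loop described in the abstract can be sketched as follows. This is a minimal illustration only: the magnitude-based pruning criterion, the per-pass pruning fraction `step`, and the stub `retrain` function are assumptions for the sketch, not details taken from the patent.

```python
import numpy as np

def prune_layer(weights, fraction):
    """Zero out the given fraction of the smallest-magnitude
    *nonzero* weights in one layer (magnitude criterion assumed)."""
    pruned = weights.copy()
    nonzero = np.flatnonzero(pruned)
    k = int(len(nonzero) * fraction)
    if k == 0:
        return pruned
    order = np.argsort(np.abs(pruned.flat[nonzero]))
    pruned.flat[nonzero[order[:k]]] = 0.0
    return pruned

def sparsity(layers):
    """Fraction of zero-valued parameters across all layers."""
    total = sum(w.size for w in layers)
    zeros = sum(int((w == 0).sum()) for w in layers)
    return zeros / total

def retrain(layers):
    """Placeholder for the retraining module: in practice the
    remaining nonzero weights would be fine-tuned on training data."""
    return layers

def prune_and_retrain(layers, target_sparsity, step=0.2):
    """Layer-wise prune, then retrain; repeat until the target
    sparsity rate has been reached, then output the model."""
    model = [w.copy() for w in layers]
    while sparsity(model) < target_sparsity:
        previous = sparsity(model)
        model = [prune_layer(w, step) for w in model]
        model = retrain(model)
        if sparsity(model) <= previous:  # guard against stalling
            break
    return model
```

Because each pass removes a fraction of the *remaining* nonzero weights, sparsity rises gradually toward the target over several prune/retrain iterations, matching the loop structure described in the abstract.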