Invention Application
- Patent Title: METHOD AND APPARATUS FOR REDUCING THE PARAMETER DENSITY OF A DEEP NEURAL NETWORK (DNN)
- Application No.: US16328182
- Application Date: 2016-09-26
- Publication No.: US20190197407A1
- Publication Date: 2019-06-27
- Inventors: Anbang YAO, Yiwen GUO, Lin XU, Yan LIN, Yurong CHEN
- Applicant: INTEL CORPORATION
- International Application: PCT/CN2016/100099 (WO), 2016-09-26
- Main IPC: G06N3/08
- IPC: G06N3/08; G06N3/04; G06F17/16

Abstract:
An apparatus and method are described for reducing the parameter density of a deep neural network (DNN). A layer-wise pruning module prunes a specified set of parameters from each layer of a reference dense neural network model to generate a second neural network model having a relatively higher sparsity rate than the reference neural network model; a retraining module retrains the second neural network model in accordance with a set of training data to generate a retrained second neural network model; and the retraining module outputs the retrained second neural network model as a final neural network model if a target sparsity rate has been reached, or provides the retrained second neural network model to the layer-wise pruning module for additional pruning if the target sparsity rate has not been reached.
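The abstract describes an iterative loop: prune each layer, retrain the sparser model, and repeat until a target sparsity rate is reached. Below is a minimal sketch of that loop using magnitude-based layer-wise pruning over plain dense weight matrices; the function names, the pruning step size, and the placeholder `retrain` are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def prune_layer(weights: np.ndarray, prune_fraction: float) -> np.ndarray:
    """Zero out prune_fraction of the remaining non-zero weights in one layer,
    selected by smallest magnitude (layer-wise magnitude pruning)."""
    nonzero = np.abs(weights[weights != 0])
    k = int(prune_fraction * nonzero.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(nonzero, k - 1)[k - 1]
    return weights * (np.abs(weights) > threshold)

def sparsity(layers) -> float:
    """Fraction of zero-valued parameters across all layers."""
    total = sum(w.size for w in layers)
    zeros = sum(int(np.count_nonzero(w == 0)) for w in layers)
    return zeros / total

def retrain(layers, training_data):
    """Placeholder for the retraining module: in practice, gradient descent
    would be run on the surviving (non-zero) parameters using training_data."""
    return layers

def prune_and_retrain(layers, training_data, target_sparsity=0.8, step=0.2):
    """Alternate layer-wise pruning and retraining until the target
    sparsity rate is reached, mirroring the prune -> retrain -> check loop."""
    while sparsity(layers) < target_sparsity:
        layers = [prune_layer(w, step) for w in layers]
        layers = retrain(layers, training_data)
    return layers

# Toy reference model: two dense weight matrices.
rng = np.random.default_rng(0)
model = [rng.standard_normal((64, 32)), rng.standard_normal((32, 10))]
sparse_model = prune_and_retrain(model, training_data=None)
print(f"final sparsity rate: {sparsity(sparse_model):.2f}")
```

In this sketch each pass removes a fixed fraction of the weights that are still non-zero, so the overall sparsity rate rises toward the target over several prune/retrain cycles rather than in a single step.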
Public/Granted literature
- US11887001B2: Method and apparatus for reducing the parameter density of a deep neural network (DNN), granted 2024-01-30