-
Publication No.: US11887001B2
Publication Date: 2024-01-30
Application No.: US16328182
Filing Date: 2016-09-26
Applicant: Intel Corporation
Inventor: Anbang Yao , Yiwen Guo , Lin Xu , Yan Lin , Yurong Chen
CPC classification number: G06N3/082 , G06F17/16 , G06N3/02 , G06N3/04 , G06N3/045 , G06N3/084 , G06N3/044
Abstract: An apparatus and method are described for reducing the parameter density of a deep neural network (DNN). A layer-wise pruning module prunes a specified set of parameters from each layer of a reference dense neural network model to generate a second neural network model having a relatively higher sparsity rate than the reference model; a retraining module retrains the second neural network model on a set of training data to generate a retrained second neural network model; the retraining module then outputs the retrained second neural network model as the final neural network model if a target sparsity rate has been reached, or returns it to the layer-wise pruning module for additional pruning if the target sparsity rate has not been reached.
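The prune-then-retrain loop in this abstract can be sketched in Python. This is a minimal magnitude-pruning illustration, not the patented implementation: layers are flat weight lists, and `retrain` is a caller-supplied placeholder standing in for any training routine that preserves the zero pattern.

```python
def prune_layer(weights, prune_fraction):
    """Zero out the smallest-magnitude fraction of the remaining nonzero weights."""
    nonzero = sorted(abs(w) for w in weights if w != 0.0)
    k = int(len(nonzero) * prune_fraction)
    if k == 0:
        return list(weights)
    threshold = nonzero[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

def sparsity(layers):
    """Fraction of all weights across all layers that are exactly zero."""
    total = sum(len(layer) for layer in layers)
    zeros = sum(1 for layer in layers for w in layer if w == 0.0)
    return zeros / total

def iterative_prune(layers, target_sparsity, prune_fraction, retrain):
    """Alternate layer-wise pruning with retraining until the target sparsity is hit."""
    while sparsity(layers) < target_sparsity:
        layers = [prune_layer(layer, prune_fraction) for layer in layers]
        layers = retrain(layers)  # retraining must keep pruned weights at zero
    return layers
```

Because each pass removes a fraction of the *surviving* weights, sparsity increases monotonically toward the target rather than stalling once zeros dominate.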
-
Publication No.: US12217163B2
Publication Date: 2025-02-04
Application No.: US18371934
Filing Date: 2023-09-22
Applicant: Intel Corporation
Inventor: Yiwen Guo , Yuqing Hou , Anbang Yao , Dongqi Cai , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yurong Chen , Libin Wang
IPC: G06K9/62 , G06F18/21 , G06F18/213 , G06F18/214 , G06N3/044 , G06N3/045 , G06N3/063 , G06N3/08 , G06V10/44 , G06V10/764 , G06V10/82 , G06V10/94 , G06V20/00
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps in obtaining weighted feature maps. The weighted feature maps are stored in the LSTM. A Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
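The two-stage attention described here (hard selection of a subset of feature maps, then soft weighting of the survivors) can be sketched as follows. The score vector, top-k selection rule, and softmax weighting are illustrative assumptions, not details taken from the patent:

```python
import math

def hard_attention(feature_maps, scores, k):
    """Hard attention: keep only the k highest-scoring feature maps."""
    ranked = sorted(range(len(feature_maps)), key=lambda i: scores[i], reverse=True)
    keep = sorted(ranked[:k])  # preserve the original map order
    return [feature_maps[i] for i in keep], [scores[i] for i in keep]

def soft_attention(selected_maps, selected_scores):
    """Soft attention: scale each surviving map by its softmax weight."""
    exps = [math.exp(s) for s in selected_scores]
    total = sum(exps)
    return [[(e / total) * v for v in fmap]
            for fmap, e in zip(selected_maps, exps)]
```

Hard attention discards low-scoring maps outright (reducing the work downstream), while soft attention distributes a normalized weight over the maps that remain; the weighted maps would then be fed to the LSTM.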
-
Publication No.: US20240086693A1
Publication Date: 2024-03-14
Application No.: US18371934
Filing Date: 2023-09-22
Applicant: Intel Corporation
Inventor: Yiwen Guo , Yuqing Hou , Anbang Yao , Dongqi Cai , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yurong Chen , Libin Wang
IPC: G06N3/063 , G06F18/21 , G06F18/213 , G06F18/214 , G06N3/044 , G06N3/045 , G06N3/08 , G06V10/44 , G06V10/764 , G06V10/82 , G06V10/94 , G06V20/00
CPC classification number: G06N3/063 , G06F18/213 , G06F18/2148 , G06F18/217 , G06N3/044 , G06N3/045 , G06N3/08 , G06V10/454 , G06V10/764 , G06V10/82 , G06V10/94 , G06V10/955 , G06V20/00
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps in obtaining weighted feature maps. The weighted feature maps are stored in the LSTM. A Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
-
Publication No.: US11803739B2
Publication Date: 2023-10-31
Application No.: US17584216
Filing Date: 2022-01-25
Applicant: Intel Corporation
Inventor: Yiwen Guo , Yuqing Hou , Anbang Yao , Dongqi Cai , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yurong Chen , Libin Wang
IPC: G06K9/62 , G06N3/063 , G06N3/08 , G06V10/94 , G06F18/21 , G06F18/213 , G06F18/214 , G06N3/044 , G06N3/045 , G06V10/764 , G06V10/82 , G06V10/44 , G06V20/00
CPC classification number: G06N3/063 , G06F18/213 , G06F18/217 , G06F18/2148 , G06N3/044 , G06N3/045 , G06N3/08 , G06V10/454 , G06V10/764 , G06V10/82 , G06V10/94 , G06V10/955 , G06V20/00
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps in obtaining weighted feature maps. The weighted feature maps are stored in the LSTM. A Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
-
Publication No.: US10430694B2
Publication Date: 2019-10-01
Application No.: US15554208
Filing Date: 2015-04-15
Applicant: Intel Corporation
Inventor: Anbang Yao , Lin Xu , Yurong Chen
Abstract: Techniques related to performing skin detection in an image are discussed. Such techniques may include generating skin and non-skin models based on a skin dominant region and another region, respectively, of the image and classifying individual pixels of the image via a discriminative skin likelihood function based on the skin model and the non-skin model.
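A discriminative skin-likelihood classifier of this kind can be sketched with simple per-region Gaussian models. This is a 1-D toy over a single colour channel; the actual models, feature space, and threshold in the patent may differ:

```python
import math

def fit_gaussian(samples):
    """Fit a 1-D Gaussian (mean, variance) to pixel values sampled from a region."""
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return mean, max(var, 1e-6)  # floor the variance to avoid division by zero

def log_likelihood(x, model):
    """Log-density of x under a 1-D Gaussian model."""
    mean, var = model
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def classify_pixel(x, skin_model, non_skin_model, threshold=0.0):
    """Discriminative likelihood ratio: positive margin classifies the pixel as skin."""
    return log_likelihood(x, skin_model) - log_likelihood(x, non_skin_model) > threshold
```

The skin model would be fit from the skin-dominant region of the image and the non-skin model from the other region, so the ratio adapts per image rather than relying on a fixed global skin colour prior.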
-
Publication No.: US11645737B2
Publication Date: 2023-05-09
Application No.: US17497555
Filing Date: 2021-10-08
Applicant: INTEL CORPORATION
Inventor: Liu Yang , Weike Chen , Lin Xu
CPC classification number: G06T5/002 , G06T5/005 , G06T5/20 , G06T2207/20028 , G06T2207/30088 , G06T2207/30201
Abstract: Skin smoothing is applied to images using a bilateral filter and aided by a skin map. In one example a method includes receiving an image having pixels at an original resolution. The image is buffered. The image is downscaled from the original resolution to a lower resolution. A bilateral filter is applied to pixels of the downscaled image. The filtered pixels of the downscaled image are blended with pixels of the image having the original resolution, and the blended image is produced.
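The downscale → bilateral-filter → blend pipeline can be sketched in one dimension. This is a toy single-row version under stated assumptions (average-pool downscale, nearest-neighbour upscale, a fixed blend weight); the real implementation operates on 2-D images and uses a skin map to drive the blend:

```python
import math

def downscale(pixels, factor):
    """Average-pool the pixel row by the given factor."""
    return [sum(pixels[i:i + factor]) / factor
            for i in range(0, len(pixels), factor)]

def bilateral_filter(pixels, sigma_s, sigma_r, radius):
    """1-D bilateral filter: weights combine spatial and range (intensity) distance."""
    out = []
    for i, p in enumerate(pixels):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(pixels), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((p - pixels[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * pixels[j]
            den += w
        out.append(num / den)
    return out

def upscale_blend(original, filtered, factor, alpha):
    """Nearest-neighbour upscale of the filtered row, blended with the original."""
    up = [filtered[min(i // factor, len(filtered) - 1)]
          for i in range(len(original))]
    return [alpha * f + (1 - alpha) * o for o, f in zip(original, up)]
```

Filtering at the lower resolution is what makes the approach cheap, while the range term of the bilateral kernel keeps strong edges from being smoothed away.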
-
Publication No.: US11537851B2
Publication Date: 2022-12-27
Application No.: US16475075
Filing Date: 2017-04-07
Applicant: INTEL CORPORATION
Inventor: Yiwen Guo , Anbang Yao , Dongqi Cai , Libin Wang , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yurong Chen
Abstract: Methods and systems are disclosed for improved training and learning of deep neural networks. In one example, a deep neural network includes a plurality of layers, and each layer has a plurality of nodes. The nodes of each L layer in the plurality of layers are randomly connected to nodes of an L+1 layer. The nodes of each L+1 layer are connected to nodes in a subsequent L layer in a one-to-one manner. Parameters related to the nodes of each L layer are fixed. Parameters related to the nodes of each L+1 layer are updated. In another example, inputs for the input layer and labels for the output layer of a deep neural network are determined related to a first sample. A similarity between different pairs of inputs and labels is estimated using a Gaussian regression process.
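The random inter-layer wiring described here can be sketched as a fixed binary connection mask applied during the forward pass. This is an illustrative reconstruction; the mask density, seeding, and exactly which parameters are frozen versus updated are assumptions, not details from the patent:

```python
import random

def random_connection_mask(n_in, n_out, density, seed=0):
    """Binary mask wiring each L-layer node to a random subset of L+1-layer nodes."""
    rng = random.Random(seed)  # fixed seed so the wiring stays constant across training
    return [[1 if rng.random() < density else 0 for _ in range(n_out)]
            for _ in range(n_in)]

def forward(inputs, weights, mask):
    """Forward pass through the randomly connected layer; masked-out weights are inactive."""
    n_out = len(weights[0])
    return [sum(inputs[i] * weights[i][j] * mask[i][j] for i in range(len(inputs)))
            for j in range(n_out)]
```

During training, the masked weights feeding each L layer would stay fixed, while only the parameters of the one-to-one L+1 connections are updated, which shrinks the set of trainable parameters.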
-
Publication No.: US20220222492A1
Publication Date: 2022-07-14
Application No.: US17584216
Filing Date: 2022-01-25
Applicant: Intel Corporation
Inventor: Yiwen Guo , Yuqing Hou , Anbang Yao , Dongqi Cai , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yurong Chen , Libin Wang
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps in obtaining weighted feature maps. The weighted feature maps are stored in the LSTM. A Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
-
Publication No.: US11263490B2
Publication Date: 2022-03-01
Application No.: US16475078
Filing Date: 2017-04-07
Applicant: INTEL CORPORATION
Inventor: Yiwen Guo , Yuqing Hou , Anbang Yao , Dongqi Cai , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yurong Chen , Libin Wang
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps in obtaining weighted feature maps. The weighted feature maps are stored in the LSTM. A Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
-
Publication No.: US11176641B2
Publication Date: 2021-11-16
Application No.: US16079308
Filing Date: 2016-03-24
Applicant: INTEL CORPORATION
Inventor: Liu Yang , Weike Chen , Lin Xu
Abstract: Skin smoothing is applied to images using a bilateral filter and aided by a skin map. In one example a method includes receiving an image having pixels at an original resolution. The image is buffered. The image is downscaled from the original resolution to a lower resolution. A bilateral filter is applied to pixels of the downscaled image. The filtered pixels of the downscaled image are blended with pixels of the image having the original resolution, and the blended image is produced.