-
Publication No.: US11120314B2
Publication Date: 2021-09-14
Application No.: US16491735
Filing Date: 2017-04-07
Applicant: INTEL CORPORATION
Inventor: Anbang Yao , Yun Ren , Hao Zhao , Tao Kong , Yurong Chen
Abstract: An example apparatus for mining multi-scale hard examples includes a convolutional neural network to receive a mini-batch of sample candidates and generate basic feature maps. The apparatus also includes a feature extractor and combiner to generate concatenated feature maps based on the basic feature maps and extract the concatenated feature maps for each of a plurality of received candidate boxes. The apparatus further includes a sample scorer and miner to score the candidate samples with multi-task loss scores and select candidate samples with multi-task loss scores exceeding a threshold score.
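A minimal sketch of the selection step this abstract describes: each candidate sample carries a multi-task loss score, and only candidates whose score exceeds a threshold are mined as hard examples. The function name and the sample values are illustrative, not from the patent.

```python
import numpy as np

def mine_hard_examples(loss_scores, threshold):
    """Return indices of candidate samples whose multi-task loss score
    exceeds the threshold (the abstract's selection criterion)."""
    scores = np.asarray(loss_scores, dtype=float)
    return np.nonzero(scores > threshold)[0]

scores = [0.10, 0.90, 0.40, 1.20, 0.05]
hard = mine_hard_examples(scores, threshold=0.5)
print(hard)  # [1 3]
```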
-
Publication No.: US20200167654A1
Publication Date: 2020-05-28
Application No.: US16615097
Filing Date: 2018-05-23
Applicant: INTEL CORPORATION
Inventor: Yiwen Guo , Anbang Yao , Hao Zhao , Ming Lu , Yurong CHEN
Abstract: Methods and apparatus are disclosed for enhancing a binary weight neural network using a dependency tree. A method of enhancing a convolutional neural network (CNN) having binary weights includes constructing a tree for obtained binary tensors, the tree having a plurality of nodes beginning with a root node in each layer of the CNN. A convolution is calculated of an input feature map with an input binary tensor at the root node of the tree. A next node is searched from the root node of the tree and a convolution is calculated at the next node using a previous convolution result calculated at the root node of the tree. The searching of a next node from the root node is repeated for all nodes of the tree, and a convolution is calculated at each next node using a previous convolution result.
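The reuse the abstract describes rests on the linearity of convolution: for a child binary tensor that differs from its parent in only a few weights, conv(x, w_child) = conv(x, w_parent) + conv(x, w_child − w_parent), and the correction term is cheap because the difference is sparse. A toy 1-D sketch of that identity (the tree construction itself is omitted; all names are illustrative):

```python
import numpy as np

def conv1d(x, w):
    """Plain 'valid' correlation, standing in for a CNN convolution."""
    n = len(x) - len(w) + 1
    return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])

rng = np.random.default_rng(0)
x = rng.standard_normal(16)                # input feature map (1-D here)
w_root = np.sign(rng.standard_normal(5))   # binary tensor at the root node
w_root[w_root == 0] = 1.0                  # guard against an exact zero
w_next = w_root.copy()
w_next[2] *= -1                            # child tensor differing in one weight

# Full convolution at the root node...
root_out = conv1d(x, w_root)
# ...then at the next node, reuse it: (w_next - w_root) is sparse,
# so the correction convolution touches few nonzero weights.
next_out = root_out + conv1d(x, w_next - w_root)

assert np.allclose(next_out, conv1d(x, w_next))
```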
-
Publication No.: US12079713B2
Publication Date: 2024-09-03
Application No.: US18142997
Filing Date: 2023-05-03
Applicant: Intel Corporation
Inventor: Anbang Yao , Hao Zhao , Ming Lu , Yiwen Guo , Yurong Chen
IPC: G06V10/82 , G06F18/214 , G06N3/04 , G06N3/063 , G06N3/08 , G06V10/44 , G06V10/764 , G06V10/94 , G06V20/10 , G06V20/40 , G06V20/70
CPC classification number: G06N3/063 , G06F18/214 , G06N3/04 , G06N3/08 , G06V10/454 , G06V10/764 , G06V10/82 , G06V10/955 , G06V20/10 , G06V20/41 , G06V20/70
Abstract: Methods and apparatus for discriminative semantic transfer and physics-inspired optimization in deep learning are disclosed. A computation training method for a convolutional neural network (CNN) includes receiving a sequence of training images in the CNN of a first stage to describe objects of a cluttered scene as a semantic segmentation mask. The semantic segmentation mask is received in a semantic segmentation network of a second stage to produce semantic features. Using weights from the first stage as feature extractors and weights from the second stage as classifiers, edges of the cluttered scene are identified using the semantic features.
-
Publication No.: US11669718B2
Publication Date: 2023-06-06
Application No.: US16609732
Filing Date: 2018-05-22
Applicant: INTEL CORPORATION
Inventor: Anbang Yao , Hao Zhao , Ming Lu , Yiwen Guo , Yurong Chen
IPC: G06V10/82 , G06N3/063 , G06N3/04 , G06N3/08 , G06F18/214 , G06V10/764 , G06V10/44 , G06V20/70 , G06V10/94 , G06V20/10 , G06V20/40
CPC classification number: G06N3/063 , G06F18/214 , G06N3/04 , G06N3/08 , G06V10/454 , G06V10/764 , G06V10/82 , G06V10/955 , G06V20/10 , G06V20/41 , G06V20/70
Abstract: Methods and apparatus for discriminative semantic transfer and physics-inspired optimization in deep learning are disclosed. A computation training method for a convolutional neural network (CNN) includes receiving a sequence of training images in the CNN of a first stage to describe objects of a cluttered scene as a semantic segmentation mask. The semantic segmentation mask is received in a semantic segmentation network of a second stage to produce semantic features. Using weights from the first stage as feature extractors and weights from the second stage as classifiers, edges of the cluttered scene are identified using the semantic features.
-
Publication No.: US20210019630A1
Publication Date: 2021-01-21
Application No.: US16982441
Filing Date: 2018-07-26
Applicant: Anbang YAO , Aojun ZHOU , Kuan WANG , Hao ZHAO , Yurong CHEN , Intel Corporation
Inventor: Anbang Yao , Aojun Zhou , Kuan Wang , Hao Zhao , Yurong Chen
Abstract: Methods, apparatus, systems and articles of manufacture for loss-error-aware quantization of a low-bit neural network are disclosed. An example apparatus includes a network weight partitioner to partition unquantized network weights of a first network model into a first group to be quantized and a second group to be retrained. The example apparatus includes a loss calculator to process network weights to calculate a first loss. The example apparatus includes a weight quantizer to quantize the first group of network weights to generate low-bit second network weights. In the example apparatus, the loss calculator is to determine a difference between the first loss and a second loss. The example apparatus includes a weight updater to update the second group of network weights based on the difference. The example apparatus includes a network model deployer to deploy a low-bit network model including the low-bit second network weights.
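A sketch of the partition-then-quantize step from this abstract. The patent drives the partition with a loss calculation; here a simple magnitude-based split stands in for it, and the quantizer is a plain uniform signed grid. All function names, the split rule, and the sample weights are illustrative, not from the patent.

```python
import numpy as np

def partition_and_quantize(weights, quant_frac=0.5, bits=2):
    """Split weights into a group to quantize now and a group kept at
    full precision for retraining, then uniformly quantize the first
    group to a low-bit signed grid."""
    w = np.asarray(weights, dtype=float)
    order = np.argsort(-np.abs(w))              # largest magnitude first
    k = int(len(w) * quant_frac)
    quant_idx, retrain_idx = order[:k], order[k:]

    levels = 2 ** (bits - 1) - 1                # symmetric signed grid
    scale = np.abs(w[quant_idx]).max() / levels
    quantized = np.round(w[quant_idx] / scale) * scale
    return quant_idx, quantized, retrain_idx

w = [0.9, -0.1, 0.4, -0.75, 0.05, 0.6]
qi, qw, ri = partition_and_quantize(w, quant_frac=0.5, bits=2)
# qi holds the 3 largest-magnitude weights; qw is their low-bit version;
# ri indexes the weights left unquantized for the retraining step.
```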
-
Publication No.: US20210019628A1
Publication Date: 2021-01-21
Application No.: US16981018
Filing Date: 2018-07-23
Applicant: Intel Corporation
Inventor: Anbang Yao , Dawei Sun , Aojun Zhou , Hao Zhao , Yurong Chen
Abstract: Methods, systems, apparatus, and articles of manufacture are disclosed to train a neural network. An example apparatus includes an architecture evaluator to determine an architecture type of a neural network, a knowledge branch implementor to select a quantity of knowledge branches based on the architecture type, and a knowledge branch inserter to improve a training metric by appending the quantity of knowledge branches to respective layers of the neural network.
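A hypothetical sketch of the branch-insertion step: choose how many auxiliary "knowledge branches" to use from the network's depth (a stand-in for the abstract's "architecture type"), then append them after evenly spaced layers. The depth-based rule and all names are illustrative, not from the patent.

```python
def select_branch_count(num_layers):
    """Illustrative rule: one auxiliary branch per four layers, minimum one."""
    return max(1, num_layers // 4)

def attach_branches(layer_names, count):
    """Return the layers after which a knowledge branch is appended."""
    step = max(1, len(layer_names) // count)
    return [layer_names[i] for i in range(step - 1, len(layer_names), step)][:count]

layers = [f"conv{i}" for i in range(1, 9)]   # an 8-layer toy network
n = select_branch_count(len(layers))
print(attach_branches(layers, n))            # ['conv4', 'conv8']
```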
-