-
1.
Publication Number: US11704569B2
Publication Date: 2023-07-18
Application Number: US16615097
Application Date: 2018-05-23
Applicant: Intel Corporation
Inventor: Yiwen Guo , Anbang Yao , Hao Zhao , Ming Lu , Yurong Chen
Abstract: Methods and apparatus are disclosed for enhancing a binary weight neural network using a dependency tree. A method of enhancing a convolutional neural network (CNN) having binary weights includes constructing a tree for obtained binary tensors, the tree having a plurality of nodes beginning with a root node in each layer of the CNN. A convolution of an input feature map with an input binary tensor is calculated at the root node of the tree. A next node is searched from the root node of the tree and a convolution is calculated at the next node using a previous convolution result calculated at the root node of the tree. The searching of a next node is repeated for all nodes of the tree, and a convolution is calculated at each next node using a previous convolution result.
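The incremental trick this abstract describes can be sketched numerically: because adjacent nodes in the tree hold similar binary tensors, a child node's convolution can reuse the parent's result and only correct for the entries where the two tensors differ. A minimal Python sketch, not the patented construction — the single-position `conv`, the 3×3 shapes, and the one-entry difference are all illustrative assumptions:

```python
import numpy as np

def conv(x, w):
    # Toy single-position "convolution": inner product of a feature-map
    # patch and a filter of the same shape.
    return float(np.sum(x * w))

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 3))              # input feature-map patch
root = np.sign(rng.standard_normal((3, 3)))  # binary tensor at the root node
root[root == 0] = 1.0                        # keep entries in {-1, +1}

child = root.copy()
child[0, 0] = -child[0, 0]                   # child node differs in a single entry

# Full convolution is computed only at the root node.
r_root = conv(x, root)

# At the child node, reuse the root result: only the entries where the
# binary tensors differ contribute to the correction term.
diff = child != root
r_child = r_root + float(np.sum(x[diff] * (child[diff] - root[diff])))

# r_child matches conv(x, child) exactly, without recomputing the full sum.
```

The saving grows with tensor size: a full convolution touches every entry, while the correction touches only the differing ones.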
-
2.
Publication Number: US20250117639A1
Publication Date: 2025-04-10
Application Number: US18886625
Application Date: 2024-09-16
Applicant: Intel Corporation
Inventor: Anbang Yao , Aojun Zhou , Kuan Wang , Hao Zhao , Yurong Chen
IPC: G06N3/063 , G06F18/21 , G06F18/214 , G06N3/047 , G06N3/084
Abstract: Methods, apparatus, systems and articles of manufacture for loss-error-aware quantization of a low-bit neural network are disclosed. An example apparatus includes a network weight partitioner to partition unquantized network weights of a first network model into a first group to be quantized and a second group to be retrained. The example apparatus includes a loss calculator to process network weights to calculate a first loss. The example apparatus includes a weight quantizer to quantize the first group of network weights to generate low-bit second network weights. In the example apparatus, the loss calculator is to determine a difference between the first loss and a second loss. The example apparatus includes a weight updater to update the second group of network weights based on the difference. The example apparatus includes a network model deployer to deploy a low-bit network model including the low-bit second network weights.
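A toy single-step sketch of the partition/quantize/retrain loop the abstract describes, assuming a stand-in rounding quantizer, a toy quadratic loss, and a magnitude-based partition — all illustrative assumptions, since the abstract does not specify them:

```python
import numpy as np

def quantize(w, step=0.5):
    # Stand-in low-bit quantizer: round to the nearest multiple of `step`.
    return np.round(w / step) * step

def loss(w, target=0.5):
    # Toy quadratic loss standing in for the network's training loss.
    return float(np.sum((w - target) ** 2))

rng = np.random.default_rng(1)
w = rng.standard_normal(8)                   # unquantized first-model weights

# Partition: quantize the larger-magnitude half, retrain the rest.
order = np.argsort(-np.abs(w))
q_idx, r_idx = order[:4], order[4:]

first_loss = loss(w)                         # loss before quantization
wq = w.copy()
wq[q_idx] = quantize(wq[q_idx])              # low-bit second network weights
second_loss = loss(wq)

# Update the retained group based on the loss difference: take a gradient
# step whose size is scaled by how much quantization increased the loss.
delta = second_loss - first_loss
lr = 0.1 * max(delta, 0.0)
grad = 2.0 * (wq - 0.5)
wq[r_idx] -= lr * grad[r_idx]
```

The retained full-precision group absorbs the quantization error; in the real method this alternates with further partitioning until all weights are low-bit.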
-
3.
Publication Number: US12112256B2
Publication Date: 2024-10-08
Application Number: US16982441
Application Date: 2018-07-26
Applicant: Intel Corporation , Anbang Yao , Aojun Zhou , Kuan Wang , Hao Zhao , Yurong Chen
Inventor: Anbang Yao , Aojun Zhou , Kuan Wang , Hao Zhao , Yurong Chen
IPC: G06N3/063 , G06F18/21 , G06F18/214 , G06N3/047 , G06N3/084
CPC classification number: G06N3/063 , G06F18/2148 , G06F18/217 , G06N3/047 , G06N3/084
Abstract: Methods, apparatus, systems and articles of manufacture for loss-error-aware quantization of a low-bit neural network are disclosed. An example apparatus includes a network weight partitioner to partition unquantized network weights of a first network model into a first group to be quantized and a second group to be retrained. The example apparatus includes a loss calculator to process network weights to calculate a first loss. The example apparatus includes a weight quantizer to quantize the first group of network weights to generate low-bit second network weights. In the example apparatus, the loss calculator is to determine a difference between the first loss and a second loss. The example apparatus includes a weight updater to update the second group of network weights based on the difference. The example apparatus includes a network model deployer to deploy a low-bit network model including the low-bit second network weights.
-
4.
Publication Number: US11790631B2
Publication Date: 2023-10-17
Application Number: US17408094
Application Date: 2021-08-20
Applicant: Intel Corporation
Inventor: Anbang Yao , Yun Ren , Hao Zhao , Tao Kong , Yurong Chen
IPC: G06V10/00 , G06V10/44 , G06N3/04 , G06N3/08 , G06V30/24 , G06F18/243 , G06V30/19 , G06V10/82 , G06V20/70 , G06V20/10
CPC classification number: G06V10/454 , G06F18/24317 , G06N3/04 , G06N3/08 , G06V10/82 , G06V20/10 , G06V20/70 , G06V30/19173 , G06V30/2504
Abstract: An example apparatus for mining multi-scale hard examples includes a convolutional neural network to receive a mini-batch of sample candidates and generate basic feature maps. The apparatus also includes a feature extractor and combiner to generate concatenated feature maps based on the basic feature maps and extract the concatenated feature maps for each of a plurality of received candidate boxes. The apparatus further includes a sample scorer and miner to score the candidate samples with multi-task loss scores and select candidate samples with multi-task loss scores exceeding a threshold score.
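The scoring-and-selection step at the end of this abstract can be illustrated with stand-in per-candidate losses; the equal multi-task weighting, the batch size, and the threshold value are assumptions for illustration, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16                                   # candidate boxes in the mini-batch
cls_loss = rng.random(n)                 # stand-in per-candidate classification loss
box_loss = rng.random(n)                 # stand-in per-candidate localization loss

# Multi-task loss score per candidate: classification plus box-regression terms.
scores = cls_loss + 1.0 * box_loss

# Select candidates whose multi-task loss exceeds the threshold: these are
# the "hard examples" the network currently handles worst.
threshold = 1.2
hard = np.flatnonzero(scores > threshold)
```

The selected hard examples are then fed back into training, so each mini-batch concentrates gradient signal on the most informative candidates.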
-
5.
Publication Number: US12154309B2
Publication Date: 2024-11-26
Application Number: US18462305
Application Date: 2023-09-06
Applicant: Intel Corporation
Inventor: Anbang Yao , Yun Ren , Hao Zhao , Tao Kong , Yurong Chen
IPC: G06V10/00 , G06F18/243 , G06N3/04 , G06N3/08 , G06V10/44 , G06V10/82 , G06V20/10 , G06V20/70 , G06V30/19 , G06V30/24
Abstract: An example apparatus for mining multi-scale hard examples includes a convolutional neural network to receive a mini-batch of sample candidates and generate basic feature maps. The apparatus also includes a feature extractor and combiner to generate concatenated feature maps based on the basic feature maps and extract the concatenated feature maps for each of a plurality of received candidate boxes. The apparatus further includes a sample scorer and miner to score the candidate samples with multi-task loss scores and select candidate samples with multi-task loss scores exceeding a threshold score.
-
6.
Publication Number: US20240013506A1
Publication Date: 2024-01-11
Application Number: US18462305
Application Date: 2023-09-06
Applicant: Intel Corporation
Inventor: Anbang Yao , Yun Ren , Hao Zhao , Tao Kong , Yurong Chen
IPC: G06V10/44 , G06N3/04 , G06N3/08 , G06V30/24 , G06F18/243 , G06V30/19 , G06V10/82 , G06V20/70 , G06V20/10
CPC classification number: G06V10/454 , G06N3/04 , G06N3/08 , G06V30/2504 , G06F18/24317 , G06V30/19173 , G06V10/82 , G06V20/70 , G06V20/10
Abstract: An example apparatus for mining multi-scale hard examples includes a convolutional neural network to receive a mini-batch of sample candidates and generate basic feature maps. The apparatus also includes a feature extractor and combiner to generate concatenated feature maps based on the basic feature maps and extract the concatenated feature maps for each of a plurality of received candidate boxes. The apparatus further includes a sample scorer and miner to score the candidate samples with multi-task loss scores and select candidate samples with multi-task loss scores exceeding a threshold score.
-
7.
Publication Number: US20210133518A1
Publication Date: 2021-05-06
Application Number: US16491735
Application Date: 2017-04-07
Applicant: Intel Corporation
Inventor: Anbang Yao , Yun Ren , Hao Zhao , Tao Kong , Yurong Chen
Abstract: An example apparatus for mining multi-scale hard examples includes a convolutional neural network to receive a mini-batch of sample candidates and generate basic feature maps. The apparatus also includes a feature extractor and combiner to generate concatenated feature maps based on the basic feature maps and extract the concatenated feature maps for each of a plurality of received candidate boxes. The apparatus further includes a sample scorer and miner to score the candidate samples with multi-task loss scores and select candidate samples with multi-task loss scores exceeding a threshold score.
-
8.
Publication Number: US20200082264A1
Publication Date: 2020-03-12
Application Number: US16609735
Application Date: 2018-05-22
Applicant: Intel Corporation
Inventor: Yiwen Guo , Anbang Yao , Hao Zhao , Ming Lu , Yurong Chen
Abstract: Methods and apparatus are disclosed for enhancing a neural network using binary tensor and scale factor pairs. For one example, a method of optimizing a trained convolutional neural network (CNN) includes initializing an approximation residue as a trained weight tensor for the trained CNN. A plurality of binary tensors and scale factor pairs are determined. The approximation residue is updated using the binary tensors and scale factor pairs.
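The initialize-then-update loop in this abstract resembles greedy residual binarization, which can be sketched as follows; the number of pairs, the tensor shape, and the mean-absolute-value scale factor are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.standard_normal((4, 4))      # trained weight tensor of a CNN layer

residue = w.copy()                   # approximation residue, initialized to the weights
pairs = []
for _ in range(3):                   # number of (binary tensor, scale factor) pairs
    b = np.sign(residue)
    b[b == 0] = 1.0                  # binary tensor in {-1, +1}
    alpha = float(np.mean(np.abs(residue)))  # least-squares scale for sign(residue)
    pairs.append((b, alpha))
    residue -= alpha * b             # update the residue with the new pair

# The weights are approximated by the sum of scaled binary tensors.
approx = sum(alpha * b for b, alpha in pairs)
```

Each pair strictly shrinks the residue, so adding pairs trades storage (one bit per weight per pair, plus one scale per pair) for approximation accuracy.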
-
9.
Publication Number: US11640526B2
Publication Date: 2023-05-02
Application Number: US16609735
Application Date: 2018-05-22
Applicant: Intel Corporation
Inventor: Yiwen Guo , Anbang Yao , Hao Zhao , Ming Lu , Yurong Chen
Abstract: Methods and apparatus are disclosed for enhancing a neural network using binary tensor and scale factor pairs. For one example, a method of optimizing a trained convolutional neural network (CNN) includes initializing an approximation residue as a trained weight tensor for the trained CNN. A plurality of binary tensors and scale factor pairs are determined. The approximation residue is updated using the binary tensors and scale factor pairs.
-
10.
Publication Number: US20220114825A1
Publication Date: 2022-04-14
Application Number: US17408094
Application Date: 2021-08-20
Applicant: Intel Corporation
Inventor: Anbang Yao , Yun Ren , Hao Zhao , Tao Kong , Yurong Chen
Abstract: An example apparatus for mining multi-scale hard examples includes a convolutional neural network to receive a mini-batch of sample candidates and generate basic feature maps. The apparatus also includes a feature extractor and combiner to generate concatenated feature maps based on the basic feature maps and extract the concatenated feature maps for each of a plurality of received candidate boxes. The apparatus further includes a sample scorer and miner to score the candidate samples with multi-task loss scores and select candidate samples with multi-task loss scores exceeding a threshold score.