-
Publication Number: US11107189B2
Publication Date: 2021-08-31
Application Number: US16474927
Application Date: 2017-04-07
Applicant: INTEL CORPORATION
Inventor: Shandong Wang , Yiwen Guo , Anbang Yao , Dongqi Cai , Libin Wang , Lin Xu , Ping Hu , Wenhua Cheng , Yurong Chen
IPC: G06K9/00 , G06T3/40 , G06N20/20 , G06N20/10 , G06K9/62 , G06N3/04 , G06N3/08 , G06N5/04 , G06T1/20
Abstract: Methods and systems are disclosed using improved convolutional neural networks (CNNs) for image processing. In one example, an input image is down-sampled into smaller images with a lower resolution than the input image. The down-sampled images are processed by a CNN whose last layer has fewer nodes than the last layer of a full CNN used to process the input image at full resolution. A result is output based on the down-sampled images processed by the CNN with the reduced last layer. In another example, shallow CNN networks are built randomly. The randomly built shallow CNN networks are combined to imitate a trained deep neural network (DNN).
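The down-sampling step described in this abstract can be sketched with plain numpy; the function name and the factor-of-2 average pooling are illustrative choices, not details taken from the patent:

```python
import numpy as np

def downsample(image, factor=2):
    """Average-pool a 2-D image by `factor` (assumes dims divisible by factor)."""
    h, w = image.shape
    return image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)   # toy "input image"
small = downsample(img)                          # lower-resolution copy for the small CNN
print(small.shape)  # (2, 2)
```

The lower-resolution output would then feed a CNN with a smaller final layer than the full-resolution network requires.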
-
Publication Number: US10685262B2
Publication Date: 2020-06-16
Application Number: US15551870
Application Date: 2015-03-20
Applicant: Intel Corporation
Inventor: Anbang Yao , Lin Xu , Jianguo Li , Yurong Chen
Abstract: Techniques related to implementing convolutional neural networks for object recognition are discussed. Such techniques may include generating a set of binary neural features via convolutional neural network layers based on input image data and applying a strong classifier to the set of binary neural features to generate an object label for the input image data.
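A minimal sketch of the binary-feature-plus-strong-classifier idea in the abstract, assuming thresholded activations as the binary features and a boosting-style weighted vote as the strong classifier; the activations, weights, and labels below are illustrative placeholders, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for activations produced by convolutional layers for one image
activations = rng.normal(size=8)

# Binarize: each neural feature becomes 0/1 by thresholding at zero
binary_features = (activations > 0).astype(int)

# "Strong classifier" as a weighted vote over the binary (weak) features;
# in practice the weights would come from training, e.g. boosting
weights = rng.uniform(size=8)
score = weights @ (2 * binary_features - 1)      # map {0,1} -> {-1,+1}
label = "person" if score > 0 else "background"
print(label)
```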
-
Publication Number: US20200026988A1
Publication Date: 2020-01-23
Application Number: US16475075
Application Date: 2017-04-07
Applicant: INTEL CORPORATION
Inventor: Yiwen Guo , Anbang Yao , Dongqi Cai , Libin Wang , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yurong Chen
Abstract: Methods and systems are disclosed using improved training and learning for deep neural networks. In one example, a deep neural network includes a plurality of layers, and each layer has a plurality of nodes. For each layer L in the plurality of layers, the nodes of layer L are randomly connected to nodes in layer L+1. For each layer L+1 in the plurality of layers, the nodes of layer L+1 are connected to nodes in the subsequent layer in a one-to-one manner. Parameters related to the nodes of each layer L are fixed, parameters related to the nodes of each layer L+1 are updated, and L is an integer starting with 1. In another example, a deep neural network includes an input layer, an output layer, and a plurality of hidden layers. Inputs for the input layer and labels for the output layer are determined for a first sample. Similarity between pairs of inputs and labels of a second sample and the first sample is estimated using Gaussian process regression.
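The similarity estimate in the second example rests on a Gaussian process covariance (kernel) function. A common choice is the squared-exponential (RBF) kernel, shown here as a minimal sketch; the abstract does not specify which kernel the patent uses, so this is an assumption for illustration:

```python
import numpy as np

def rbf_kernel(x, y, length_scale=1.0):
    """Squared-exponential covariance, the default kernel in GP regression."""
    d = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    return np.exp(-0.5 * (d / length_scale) ** 2)

print(rbf_kernel([0.0, 0.0], [0.0, 0.0]))  # 1.0: identical samples are maximally similar
```

Similarity decays smoothly toward zero as the distance between two samples grows.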
-
Publication Number: US20200026965A1
Publication Date: 2020-01-23
Application Number: US16475078
Application Date: 2017-04-07
Applicant: INTEL CORPORATION
Inventor: Yiwen Guo , Yuqing Hou , Anbang Yao , Dongqi Cai , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yurong Chen , Libin Wang
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long-short time memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard-attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps in obtaining weighted feature maps. The weighted feature maps are stored in the LSTM. A Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
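The hard/soft local-attention steps in the second example can be sketched with numpy: hard attention selects a subset of feature maps, soft attention weights the survivors. The scoring rule (mean activation) and map sizes are illustrative assumptions, not details from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)
feature_maps = rng.normal(size=(16, 8, 8))   # 16 maps from a CNN (toy sizes)

# Hard attention: keep only the k maps with the largest mean activation
k = 4
scores = feature_maps.mean(axis=(1, 2))
keep = np.argsort(scores)[-k:]
subset = feature_maps[keep]

# Soft attention: softmax weights over the surviving maps
w = np.exp(scores[keep])
w /= w.sum()
weighted = subset * w[:, None, None]         # weighted maps, ready for the LSTM
print(weighted.shape)  # (4, 8, 8)
```

In the patent's RDQN, the weighted maps would be stored in the LSTM and used to compute Q values per action.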
-
Publication Number: US20200026499A1
Publication Date: 2020-01-23
Application Number: US16475080
Application Date: 2017-04-07
Applicant: INTEL CORPORATION
Inventor: Yiwen Guo , Anbang Yao , Dongqi Cai , Libin Wang , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng
Abstract: Described herein is hardware acceleration of random number generation for machine learning and deep learning applications. An apparatus (700) includes a uniform random number generator (URNG) circuit (710) to generate uniform random numbers and an adder circuit (750) coupled to the URNG circuit (710). The adder circuit (750) accelerates, in hardware, the generation of Gaussian random numbers for machine learning.
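The abstract does not spell out how the adder produces Gaussian values, but the classic way an adder circuit turns uniform random numbers into approximately Gaussian ones is the central-limit-theorem (Irwin-Hall) construction: sum several uniform samples and recenter. A software sketch under that assumption:

```python
import numpy as np

rng = np.random.default_rng(42)

# Sum of n uniforms on [0, 1) has mean n/2 and variance n/12; with n = 12
# the variance is 1, so subtracting n/2 yields an approximately
# standard-normal sample -- exactly what a URNG feeding an adder produces.
n = 12
samples = rng.uniform(size=(100_000, n)).sum(axis=1) - n / 2

print(samples.mean(), samples.std())  # both close to 0 and 1 respectively
```

In hardware this replaces an expensive transcendental-function pipeline (as in Box-Muller) with one adder tree, which is the acceleration the apparatus targets.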
-
Publication Number: US20180032844A1
Publication Date: 2018-02-01
Application Number: US15551870
Application Date: 2015-03-20
Applicant: Intel Corporation
Inventor: Anbang Yao , Lin Xu , Jianguo Li , Yurong Chen
CPC classification number: G06K9/6269 , G06K9/00362 , G06K9/4619 , G06K9/6256 , G06K9/6286 , G06K9/66 , G06K9/80 , G06N3/0454 , G06N3/08
Abstract: Techniques related to implementing convolutional neural networks for object recognition are discussed. Such techniques may include generating a set of binary neural features via convolutional neural network layers based on input image data and applying a strong classifier to the set of binary neural features to generate an object label for the input image data.
-