-
Publication No.: US11635943B2
Publication Date: 2023-04-25
Application No.: US16475080
Filing Date: 2017-04-07
Applicant: Intel Corporation
Inventor: Yiwen Guo , Anbang Yao , Dongqi Cai , Libin Wang , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng
Abstract: Described herein is hardware acceleration of random number generation for machine learning and deep learning applications. An apparatus (700) includes a uniform random number generator (URNG) circuit (710) to generate uniform random numbers and an adder circuit (750) that is coupled to the URNG circuit (710). The adder circuit (750) hardware-accelerates generation of Gaussian random numbers for machine learning.
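The abstract does not spell out how the adder circuit turns uniform samples into Gaussian ones. A common hardware-friendly approach is to sum several uniform draws and recenter the total (the central-limit-theorem / Irwin-Hall construction), which reduces the transform to pure addition. The sketch below illustrates that general idea in software; it is an assumption about the technique, not the patented circuit.

```python
import random

def gaussian_from_uniform(n_terms=12):
    """Approximate a standard-normal draw by summing uniform draws
    (central limit theorem). The sum of 12 U(0,1) samples has mean 6
    and variance 1, so subtracting 6 yields an approximate N(0, 1)
    sample, using only additions per output."""
    return sum(random.random() for _ in range(n_terms)) - 6.0

samples = [gaussian_from_uniform() for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# mean should be close to 0 and var close to 1
```

In hardware this maps naturally onto an adder tree fed by uniform random bits, which is consistent with the URNG-plus-adder structure the abstract describes.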
-
Publication No.: US12217163B2
Publication Date: 2025-02-04
Application No.: US18371934
Filing Date: 2023-09-22
Applicant: Intel Corporation
Inventor: Yiwen Guo , Yuqing Hou , Anbang Yao , Dongqi Cai , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yurong Chen , Libin Wang
IPC: G06K9/62 , G06F18/21 , G06F18/213 , G06F18/214 , G06N3/044 , G06N3/045 , G06N3/063 , G06N3/08 , G06V10/44 , G06V10/764 , G06V10/82 , G06V10/94 , G06V20/00
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps to obtain weighted feature maps. The weighted feature maps are stored in the LSTM. A Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
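The hard/soft attention pipeline the abstract describes (select a subset of CNN feature maps, then weight the selected maps) can be sketched in a few lines. The map sizes, the mean-activation scoring rule, and the softmax weighting below are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for CNN output: 16 feature maps of size 8x8 (illustrative sizes).
feature_maps = rng.standard_normal((16, 8, 8))

# Hard attention: select a subset of maps, here the 4 with the largest
# mean activation (the scoring rule is an assumption for illustration).
scores = feature_maps.mean(axis=(1, 2))
selected = np.argsort(scores)[-4:]
subset = feature_maps[selected]

# Soft attention: a softmax over the selected maps' scores gives one
# weight per map; the weighted maps would then feed the LSTM state.
w = np.exp(scores[selected] - scores[selected].max())
w /= w.sum()
weighted_maps = subset * w[:, None, None]
```

The point of the two-stage scheme is that hard attention prunes the maps the LSTM must store, while soft attention still lets the retained maps contribute in proportion to their relevance.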
-
Publication No.: US20240086693A1
Publication Date: 2024-03-14
Application No.: US18371934
Filing Date: 2023-09-22
Applicant: Intel Corporation
Inventor: Yiwen Guo , Yuqing Hou , Anbang Yao , Dongqi Cai , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yurong Chen , Libin Wang
IPC: G06N3/063 , G06F18/21 , G06F18/213 , G06F18/214 , G06N3/044 , G06N3/045 , G06N3/08 , G06V10/44 , G06V10/764 , G06V10/82 , G06V10/94 , G06V20/00
CPC classification number: G06N3/063 , G06F18/213 , G06F18/2148 , G06F18/217 , G06N3/044 , G06N3/045 , G06N3/08 , G06V10/454 , G06V10/764 , G06V10/82 , G06V10/94 , G06V10/955 , G06V20/00
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps to obtain weighted feature maps. The weighted feature maps are stored in the LSTM. A Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
-
Publication No.: US11803739B2
Publication Date: 2023-10-31
Application No.: US17584216
Filing Date: 2022-01-25
Applicant: Intel Corporation
Inventor: Yiwen Guo , Yuqing Hou , Anbang Yao , Dongqi Cai , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yurong Chen , Libin Wang
IPC: G06K9/62 , G06N3/063 , G06N3/08 , G06V10/94 , G06F18/21 , G06F18/213 , G06F18/214 , G06N3/044 , G06N3/045 , G06V10/764 , G06V10/82 , G06V10/44 , G06V20/00
CPC classification number: G06N3/063 , G06F18/213 , G06F18/217 , G06F18/2148 , G06N3/044 , G06N3/045 , G06N3/08 , G06V10/454 , G06V10/764 , G06V10/82 , G06V10/94 , G06V10/955 , G06V20/00
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps to obtain weighted feature maps. The weighted feature maps are stored in the LSTM. A Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
-
Publication No.: US11537851B2
Publication Date: 2022-12-27
Application No.: US16475075
Filing Date: 2017-04-07
Applicant: Intel Corporation
Inventor: Yiwen Guo , Anbang Yao , Dongqi Cai , Libin Wang , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yurong Chen
Abstract: Methods and systems are disclosed for improved training and learning in deep neural networks. In one example, a deep neural network includes a plurality of layers, and each layer has a plurality of nodes. The nodes of each L layer in the plurality of layers are randomly connected to nodes of an L+1 layer. The nodes of each L+1 layer are connected to nodes in a subsequent L layer in a one-to-one manner. Parameters related to the nodes of each L layer are fixed. Parameters related to the nodes of each L+1 layer are updated. In another example, inputs for the input layer and labels for the output layer of a deep neural network are determined related to a first sample. A similarity between different pairs of inputs and labels is estimated using Gaussian process regression.
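As a rough illustration of estimating similarity between input/label pairs in the Gaussian-process setting, the squared-exponential (RBF) kernel below is the standard covariance function used in Gaussian process regression; the kernel choice and length scale are assumptions for illustration, not the claimed method.

```python
import numpy as np

def rbf_similarity(a, b, length_scale=1.0):
    """Squared-exponential (RBF) kernel, the usual covariance in
    Gaussian process regression: similarity decays exponentially
    with the squared Euclidean distance between two samples."""
    d2 = np.sum((np.asarray(a, dtype=float) - np.asarray(b, dtype=float)) ** 2)
    return float(np.exp(-d2 / (2.0 * length_scale ** 2)))

# Identical samples are maximally similar; distant ones are nearly
# uncorrelated under this kernel.
s_same = rbf_similarity([1.0, 2.0], [1.0, 2.0])  # -> 1.0
s_far = rbf_similarity([1.0, 2.0], [5.0, 9.0])
```

In a GP view, these pairwise similarities populate the covariance matrix from which predictions for unseen input/label pairs are derived.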
-
Publication No.: US20220222492A1
Publication Date: 2022-07-14
Application No.: US17584216
Filing Date: 2022-01-25
Applicant: Intel Corporation
Inventor: Yiwen Guo , Yuqing Hou , Anbang Yao , Dongqi Cai , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yurong Chen , Libin Wang
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps to obtain weighted feature maps. The weighted feature maps are stored in the LSTM. A Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
-
Publication No.: US11263490B2
Publication Date: 2022-03-01
Application No.: US16475078
Filing Date: 2017-04-07
Applicant: Intel Corporation
Inventor: Yiwen Guo , Yuqing Hou , Anbang Yao , Dongqi Cai , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yurong Chen , Libin Wang
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps to obtain weighted feature maps. The weighted feature maps are stored in the LSTM. A Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
-
Publication No.: US11176632B2
Publication Date: 2021-11-16
Application No.: US16474540
Filing Date: 2017-04-07
Applicant: Intel Corporation
Inventor: Anbang Yao , Dongqi Cai , Libin Wang , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yiwen Guo , Liu Yang , Yuqing Hou , Zhou Su
Abstract: Described herein are advanced artificial intelligence agents for modeling physical interactions. An apparatus to provide an active artificial intelligence (AI) agent includes at least one database to store physical interaction data and a compute cluster coupled to the at least one database. The compute cluster automatically obtains physical interaction data from a data collection module without manual interaction, stores the physical interaction data in the at least one database, and automatically trains diverse sets of machine learning program units to simulate physical interactions, with each individual program unit having a different model based on the applied physical interaction data.
-
Publication No.: US20210201078A1
Publication Date: 2021-07-01
Application No.: US16475079
Filing Date: 2017-04-07
Applicant: Intel Corporation
Inventor: Anbang Yao , Shandong Wang , Wenhua Cheng , Dongqi Cai , Libin Wang , Lin Xu , Ping Hu , Yiwen Guo , Liu Yang , Yuqing Hou , Zhou Su , Yurong Chen
Abstract: Methods and systems are disclosed for advanced and augmented training of deep neural networks (DNNs) using synthetic data and innovative generative networks. A method includes training a DNN using synthetic data, training a plurality of DNNs using context data, associating features of the DNNs trained using context data with features of the DNN trained with synthetic data, and generating an augmented DNN using the associated features.
-
Publication No.: US20200285879A1
Publication Date: 2020-09-10
Application No.: US16651935
Filing Date: 2017-11-08
Applicant: Intel Corporation
Inventor: Wenhua Cheng , Anbang Yao , Libin Wang , Dongqi Cai , Jianguo Li , Yurong Chen
Abstract: A semiconductor package apparatus may include technology to apply a trained scene text detection network to an image to identify a core text region, a supportive text region, and a background region of the image, and detect text in the image based on the identified core text region and supportive text region. Other embodiments are disclosed and claimed.
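One way to picture the three-region scheme above is a per-pixel class map from which text is recovered by merging the core and supportive regions. The toy class map and the union rule below are illustrative assumptions, not the claimed detector.

```python
import numpy as np

# Toy per-pixel prediction: 0 = background, 1 = supportive text,
# 2 = core text -- the three regions the network distinguishes.
class_map = np.array([
    [0, 0, 1, 1],
    [0, 1, 2, 2],
    [0, 1, 2, 2],
    [0, 0, 1, 1],
])

# Text is detected wherever core or supportive regions appear; the
# union rule for combining the two regions is an assumption.
text_mask = (class_map == 2) | (class_map == 1)
n_text_pixels = int(text_mask.sum())
```

Splitting text into core and supportive regions lets the surrounding (supportive) pixels reinforce detections whose core alone would be weak, which the abstract's combined-region detection suggests.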