-
Publication No.: US11635943B2
Publication Date: 2023-04-25
Application No.: US16475080
Filing Date: 2017-04-07
Applicant: Intel Corporation
Inventor: Yiwen Guo , Anbang Yao , Dongqi Cai , Libin Wang , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng
Abstract: Described herein is hardware acceleration of random number generation for machine learning and deep learning applications. An apparatus (700) includes a uniform random number generator (URNG) circuit (710) to generate uniform random numbers and an adder circuit (750) that is coupled to the URNG circuit (710). The adder circuit (750) hardware-accelerates generation of Gaussian random numbers for machine learning.
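The URNG-plus-adder structure in the abstract suggests the classic central-limit-theorem construction: summing several uniform samples yields an approximately Gaussian value. A minimal software sketch of that idea (the choice of 12 terms, which makes the variance exactly 1, is an assumption, not taken from the patent):

```python
import random

def gaussian_from_uniform(n_terms: int = 12) -> float:
    """Approximate a standard-normal sample by summing uniform samples.

    By the central limit theorem, the sum of n i.i.d. U(0, 1) draws has
    mean n/2 and variance n/12; with n = 12 the variance is exactly 1,
    so subtracting 6 yields an approximately N(0, 1) sample. This mirrors
    the URNG-feeding-an-adder structure described in the abstract.
    """
    return sum(random.random() for _ in range(n_terms)) - n_terms / 2.0

# Draw many samples and check the first two moments.
samples = [gaussian_from_uniform() for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

In hardware, the per-term multiplies disappear entirely: the adder circuit only accumulates the URNG outputs, which is why this construction is attractive for acceleration.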
-
Publication No.: US20220207678A1
Publication Date: 2022-06-30
Application No.: US17482998
Filing Date: 2021-09-23
Applicant: Intel Corporation
Inventor: Anbang Yao , Ming Lu , Yikai Wang , Shandong Wang , Yurong Chen , Sungye Kim , Attila Tamas Afra
Abstract: The present disclosure provides an apparatus and method of guided neural network model for image processing. An apparatus may comprise a guidance map generator, a synthesis network and an accelerator. The guidance map generator may receive a first image as a content image and a second image as a style image, and generate a first plurality of guidance maps and a second plurality of guidance maps, respectively from the first image and the second image. The synthesis network may synthesize the first plurality of guidance maps and the second plurality of guidance maps to determine guidance information. The accelerator may generate an output image by applying the style of the second image to the first image based on the guidance information.
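The pipeline in the abstract is: derive guidance maps from each image, then synthesize the two stacks into joint guidance information. The patent does not specify how the maps are computed, so this sketch uses luminance gradients as a hypothetical guidance-map generator and a per-map correlation as a toy synthesis step:

```python
import numpy as np

def guidance_maps(image: np.ndarray) -> list:
    """Hypothetical guidance map generator: raw intensities plus
    horizontal and vertical gradient magnitudes of a grayscale image."""
    gy, gx = np.gradient(image.astype(np.float64))
    return [image.astype(np.float64), np.abs(gx), np.abs(gy)]

def synthesize_guidance(content_maps, style_maps) -> list:
    """Toy synthesis network: normalized correlation between each
    content guidance map and its style counterpart."""
    info = []
    for c, s in zip(content_maps, style_maps):
        c0, s0 = c - c.mean(), s - s.mean()
        denom = np.sqrt((c0 ** 2).sum() * (s0 ** 2).sum()) + 1e-12
        info.append(float((c0 * s0).sum() / denom))
    return info

rng = np.random.default_rng(0)
content = rng.random((8, 8))   # first image (content)
style = rng.random((8, 8))     # second image (style)
guidance = synthesize_guidance(guidance_maps(content), guidance_maps(style))
```

The real synthesis network is learned; the correlation stand-in only illustrates where the two map stacks meet before the accelerator applies the style.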
-
Publication No.: US11106896B2
Publication Date: 2021-08-31
Application No.: US16958542
Filing Date: 2018-03-26
Applicant: Intel Corporation
Inventor: Ping Hu , Anbang Yao , Yurong Chen , Dongqi Cai , Shandong Wang
Abstract: Methods and apparatus for multi-task recognition using neural networks are disclosed. An example apparatus includes a filter engine to generate a facial identifier feature map based on image data, the facial identifier feature map to identify a face within the image data. The example apparatus also includes a sibling semantic engine to process the facial identifier feature map to generate an attribute feature map associated with a facial attribute. The example apparatus also includes a task loss engine to calculate a probability factor for the attribute, the probability factor identifying the facial attribute. The example apparatus also includes a report generator to generate a report indicative of a classification of the facial attribute.
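The architecture described is a shared feature extractor (the filter engine) feeding per-attribute sibling branches, each ending in a probability over attribute classes. A minimal numpy sketch under that reading (the projections and the two attribute tasks are illustrative stand-ins, not the patent's networks):

```python
import numpy as np

def filter_engine(image: np.ndarray) -> np.ndarray:
    """Stand-in filter engine: a fixed random projection mapping image
    pixels to a shared facial-identifier feature vector."""
    w = np.random.default_rng(0).standard_normal((image.size, 16))
    return image.reshape(-1) @ w

def sibling_head(features: np.ndarray, n_classes: int, seed: int) -> np.ndarray:
    """One sibling branch: a linear layer plus softmax giving the
    probability factor for a single facial attribute."""
    w = np.random.default_rng(seed).standard_normal((features.size, n_classes))
    logits = features @ w
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(42)
image = rng.random((8, 8))
features = filter_engine(image)
# Two hypothetical attribute tasks sharing the same features.
probs_glasses = sibling_head(features, n_classes=2, seed=1)
probs_expression = sibling_head(features, n_classes=3, seed=2)
# The report generator: one classification per attribute.
report = {
    "glasses": int(probs_glasses.argmax()),
    "expression": int(probs_expression.argmax()),
}
```

The point of the sibling structure is that adding an attribute costs one extra head, not a whole new backbone.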
-
Publication No.: US20210004572A1
Publication Date: 2021-01-07
Application No.: US16958542
Filing Date: 2018-03-26
Applicant: Intel Corporation
Inventor: Ping Hu , Anbang Yao , Yurong Chen , Dongqi Cai , Shandong Wang
Abstract: Methods and apparatus for multi-task recognition using neural networks are disclosed. An example apparatus includes a filter engine to generate a facial identifier feature map based on image data, the facial identifier feature map to identify a face within the image data. The example apparatus also includes a sibling semantic engine to process the facial identifier feature map to generate an attribute feature map associated with a facial attribute. The example apparatus also includes a task loss engine to calculate a probability factor for the attribute, the probability factor identifying the facial attribute. The example apparatus also includes a report generator to generate a report indicative of a classification of the facial attribute.
-
Publication No.: US20250068916A1
Publication Date: 2025-02-27
Application No.: US18725028
Filing Date: 2022-02-21
Applicant: Intel Corporation
Inventor: Yurong Chen , Anbang Yao , Yi Qian , Yu Zhang , Shandong Wang
IPC: G06N3/088
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for teacher-free self-feature distillation training of machine-learning (ML) models. An example apparatus includes at least one memory, instructions, and processor circuitry to at least one of execute or instantiate the instructions to perform a first comparison of (i) a first group of a first set of feature channels (FCs) of an ML model and (ii) a second group of the first set, perform a second comparison of (iii) a first group of a second set of FCs of the ML model and one of (iv) a third group of the first set or a first group of a third set of FCs of the ML model, adjust parameter(s) of the ML model based on the first and/or second comparisons, and, in response to an error value satisfying a threshold, deploy the ML model to execute a workload based on the parameter(s).
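The comparisons the abstract describes pit groups of feature channels against other groups, within one layer and across layers, and use the mismatch as a training signal with no external teacher. A toy sketch of those two comparisons, using mean-squared error as the (assumed) distance:

```python
import numpy as np

def group_channels(feature_map: np.ndarray, n_groups: int) -> list:
    """Split a (C, H, W) feature map into equal channel groups."""
    return np.split(feature_map, n_groups, axis=0)

def self_distillation_loss(fm_a: np.ndarray, fm_b: np.ndarray) -> float:
    """Mean-squared error between two channel groups: a teacher-free
    distillation signal in the spirit of the abstract's 'comparisons'."""
    return float(np.mean((fm_a - fm_b) ** 2))

rng = np.random.default_rng(0)
layer1 = rng.standard_normal((8, 4, 4))   # first set of feature channels
layer2 = rng.standard_normal((8, 4, 4))   # second set of feature channels

g1 = group_channels(layer1, 4)
g2 = group_channels(layer2, 4)

# First comparison: two groups within the first set.
loss_intra = self_distillation_loss(g1[0], g1[1])
# Second comparison: a group of the second set against a group of the first.
loss_inter = self_distillation_loss(g2[0], g1[2])
total_loss = loss_intra + loss_inter
```

In training, `total_loss` would be backpropagated to adjust the model's parameters; deployment happens once the error value satisfies a threshold, as the abstract states.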
-
Publication No.: US12217163B2
Publication Date: 2025-02-04
Application No.: US18371934
Filing Date: 2023-09-22
Applicant: Intel Corporation
Inventor: Yiwen Guo , Yuqing Hou , Anbang Yao , Dongqi Cai , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yurong Chen , Libin Wang
IPC: G06K9/62 , G06F18/21 , G06F18/213 , G06F18/214 , G06N3/044 , G06N3/045 , G06N3/063 , G06N3/08 , G06V10/44 , G06V10/764 , G06V10/82 , G06V10/94 , G06V20/00
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps in obtaining weighted feature maps. The weighted feature maps are stored in the LSTM. A Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
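The RDQN's local attention is two-stage: hard attention discards all but a subset of the CNN's feature maps, then soft attention weights the survivors. A sketch of that mechanism (selecting by activation energy and weighting by softmax are plausible choices, not details given in the abstract):

```python
import numpy as np

def local_attention(feature_maps: np.ndarray, k: int):
    """Two-stage local attention over a stack of (N, H, W) feature maps.

    Hard attention keeps the k maps with the largest activation energy;
    soft attention then softmax-weights the kept maps, as in the
    hard-then-soft mechanism the abstract describes.
    """
    energy = (feature_maps ** 2).sum(axis=(1, 2))
    keep = np.argsort(energy)[-k:]            # hard attention: pick a subset
    selected = feature_maps[keep]
    scores = energy[keep]
    w = np.exp(scores - scores.max())
    w /= w.sum()                              # soft attention: weights sum to 1
    weighted = selected * w[:, None, None]
    return keep, weighted

rng = np.random.default_rng(0)
maps = rng.standard_normal((16, 5, 5))        # CNN output: 16 feature maps
keep, weighted = local_attention(maps, k=4)
```

In the full RDQN, `weighted` would be flattened into the LSTM state, and Q values for the candidate actions would be computed from that state.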
-
Publication No.: US20240086693A1
Publication Date: 2024-03-14
Application No.: US18371934
Filing Date: 2023-09-22
Applicant: Intel Corporation
Inventor: Yiwen Guo , Yuqing Hou , Anbang Yao , Dongqi Cai , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yurong Chen , Libin Wang
IPC: G06N3/063 , G06F18/21 , G06F18/213 , G06F18/214 , G06N3/044 , G06N3/045 , G06N3/08 , G06V10/44 , G06V10/764 , G06V10/82 , G06V10/94 , G06V20/00
CPC classification number: G06N3/063 , G06F18/213 , G06F18/2148 , G06F18/217 , G06N3/044 , G06N3/045 , G06N3/08 , G06V10/454 , G06V10/764 , G06V10/82 , G06V10/94 , G06V10/955 , G06V20/00
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps in obtaining weighted feature maps. The weighted feature maps are stored in the LSTM. A Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
-
Publication No.: US11869171B2
Publication Date: 2024-01-09
Application No.: US17090170
Filing Date: 2020-11-05
Applicant: Intel Corporation
Inventor: Anbang Yao , Ming Lu , Yikai Wang , Xiaoming Chen , Junjie Huang , Tao Lv , Yuanke Luo , Yi Yang , Feng Chen , Zhiming Wang , Zhiqiao Zheng , Shandong Wang
CPC classification number: G06T5/002 , G06N3/04 , G06T2207/20081 , G06T2207/20084
Abstract: Embodiments are generally directed to an adaptive deformable kernel prediction network for image de-noising. An embodiment of a method for de-noising an image by a convolutional neural network implemented on a compute engine, the image including a plurality of pixels, the method comprising: for each of the plurality of pixels of the image, generating a convolutional kernel having a plurality of kernel values for the pixel; generating a plurality of offsets for the pixel respectively corresponding to the plurality of kernel values, each of the plurality of offsets to indicate a deviation from a pixel position of the pixel; determining a plurality of deviated pixel positions based on the pixel position of the pixel and the plurality of offsets; and filtering the pixel with the convolutional kernel and pixel values of the plurality of deviated pixel positions to obtain a de-noised pixel.
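The filtering step in the method is: for each pixel, read the image not at a fixed neighborhood but at positions displaced by the predicted offsets, and combine those samples with the predicted kernel values. A single-pixel sketch (the 4-tap kernel and the offsets are illustrative; in the patent both are predicted per pixel by the network):

```python
import numpy as np

def filter_pixel(image, y, x, kernel, offsets):
    """Filter one pixel with a kernel whose taps are read from offset
    (deviated) positions rather than a fixed neighborhood."""
    h, w = image.shape
    acc = 0.0
    for k_val, (dy, dx) in zip(kernel, offsets):
        yy = int(np.clip(y + dy, 0, h - 1))   # deviated pixel position
        xx = int(np.clip(x + dx, 0, w - 1))
        acc += k_val * image[yy, xx]
    return acc

rng = np.random.default_rng(0)
noisy = rng.random((6, 6))
# Hypothetical per-pixel predictions: a normalized 4-tap kernel and one
# integer (dy, dx) offset per tap (real offsets may be fractional).
kernel = np.array([0.25, 0.25, 0.25, 0.25])
offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
denoised = filter_pixel(noisy, 3, 3, kernel, offsets)
```

Because the offsets deform the sampling pattern, the same kernel size can follow edges and textures instead of blurring across them, which is the motivation for predicting offsets adaptively.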
-
Publication No.: US11803739B2
Publication Date: 2023-10-31
Application No.: US17584216
Filing Date: 2022-01-25
Applicant: Intel Corporation
Inventor: Yiwen Guo , Yuqing Hou , Anbang Yao , Dongqi Cai , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yurong Chen , Libin Wang
IPC: G06K9/62 , G06N3/063 , G06N3/08 , G06V10/94 , G06F18/21 , G06F18/213 , G06F18/214 , G06N3/044 , G06N3/045 , G06V10/764 , G06V10/82 , G06V10/44 , G06V20/00
CPC classification number: G06N3/063 , G06F18/213 , G06F18/217 , G06F18/2148 , G06N3/044 , G06N3/045 , G06N3/08 , G06V10/454 , G06V10/764 , G06V10/82 , G06V10/94 , G06V10/955 , G06V20/00
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset of generated feature maps by providing weights to the selected subset of generated feature maps in obtaining weighted feature maps. The weighted feature maps are stored in the LSTM. A Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
-
Publication No.: US11308675B2
Publication Date: 2022-04-19
Application No.: US16971132
Filing Date: 2018-06-14
Applicant: Intel Corporation
Inventor: Shandong Wang , Ming Lu , Anbang Yao , Yurong Chen
Abstract: Techniques related to capturing 3D faces using image and temporal tracking neural networks and modifying output video using the captured 3D faces are discussed. Such techniques include applying a first neural network to an input vector corresponding to a first video image having a representation of a human face to generate a morphable model parameter vector, applying a second neural network to an input vector corresponding to the first video image and a second, temporally subsequent video image to generate a morphable model parameter delta vector, generating a 3D face model of the human face using the morphable model parameter vector and the morphable model parameter delta vector, and generating output video using the 3D face model.
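The two-network split in the abstract maps naturally onto a linear morphable model: one network outputs the parameter vector for a frame, the other outputs only the parameter delta between consecutive frames. A sketch of how those outputs combine into 3D face geometry (the random mean shape, basis, and parameter values below are placeholders for a real morphable model and the networks' predictions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vertices, n_params = 30, 5
mean_shape = rng.standard_normal(3 * n_vertices)        # flattened xyz
basis = rng.standard_normal((3 * n_vertices, n_params))  # deformation basis

def reconstruct(params: np.ndarray) -> np.ndarray:
    """Linear morphable model: mean shape plus basis-weighted deformation."""
    return mean_shape + basis @ params

# First network: parameters from frame t. Second network: a delta from
# frames (t, t+1), tracking the face over time as in the abstract.
params_t = rng.standard_normal(n_params) * 0.1
delta = rng.standard_normal(n_params) * 0.01
face_t = reconstruct(params_t)
face_t1 = reconstruct(params_t + delta)
```

Predicting deltas instead of re-estimating full parameters each frame keeps the reconstruction temporally stable, which matters when the 3D face drives modified output video.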