-
Publication No.: US11120304B2
Publication Date: 2021-09-14
Application No.: US16929976
Filing Date: 2020-07-15
Applicant: Intel Corporation
Abstract: A mechanism is described for facilitating the transfer of features learned by a context-independent pre-trained deep neural network to a context-dependent neural network. The mechanism includes extracting, via a deep learning framework, a feature learned by a first deep neural network (DNN) model, wherein the first DNN model is a pre-trained DNN model for computer vision that enables context-independent classification of an object within an input video frame, and training, via the deep learning framework, a second DNN model for computer vision based on the extracted feature, the second DNN model being an update of the first DNN model, wherein training the second DNN model includes training it on a dataset that includes context-dependent data.
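The flow described in this abstract is essentially a transfer-learning recipe: freeze a generically pre-trained vision model and retrain only a new head on context-dependent data. Below is a minimal sketch of that idea, assuming a PyTorch/torchvision setup with ResNet-18 standing in for the pre-trained "first DNN"; the model choice, data loader, and hyperparameters are illustrative assumptions, not details taken from the patent.

import torch
import torch.nn as nn
from torchvision import models

def build_context_dependent_model(num_context_classes: int) -> nn.Module:
    # "First DNN": a generic pre-trained vision model (context-independent features).
    first_dnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the learned feature extractor so its features transfer unchanged.
    for p in first_dnn.parameters():
        p.requires_grad = False

    # "Second DNN": same backbone with a new head trained on context-dependent data.
    first_dnn.fc = nn.Linear(first_dnn.fc.in_features, num_context_classes)
    return first_dnn

def train_on_context_data(model: nn.Module, loader, epochs: int = 1) -> None:
    # Only the new head's parameters are updated.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for frames, labels in loader:  # context-dependent video frames and labels
            optimizer.zero_grad()
            loss = loss_fn(model(frames), labels)
            loss.backward()
            optimizer.step()

With this setup only the new head receives gradients, which loosely mirrors the abstract's notion of the second model being an update of the first.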
-
Publication No.: US11663456B2
Publication Date: 2023-05-30
Application No.: US17400908
Filing Date: 2021-08-12
Applicant: Intel Corporation
IPC Classes: G06N3/063, G06N3/08, G06N3/04, G06N3/084, G06F9/46, G06V10/94, G06V10/44, G06V20/00, G06F18/214, G06F18/2411, G06F18/2413, G06N3/044, G06N3/045, G06V10/764, G06V10/774, G06V10/82, G06V40/16
CPC Classes: G06N3/063, G06F9/46, G06F18/214, G06F18/2148, G06F18/2411, G06F18/2413, G06N3/04, G06N3/044, G06N3/045, G06N3/08, G06N3/084, G06V10/454, G06V10/764, G06V10/774, G06V10/82, G06V10/95, G06V10/955, G06V20/00, G06V40/174, G06V2201/06
Abstract: A mechanism is described for facilitating the transfer of features learned by a context-independent pre-trained deep neural network to a context-dependent neural network. The mechanism includes extracting, via a deep learning framework, a feature learned by a first deep neural network (DNN) model, wherein the first DNN model is a pre-trained DNN model for computer vision that enables context-independent classification of an object within an input video frame, and training, via the deep learning framework, a second DNN model for computer vision based on the extracted feature, the second DNN model being an update of the first DNN model, wherein training the second DNN model includes training it on a dataset that includes context-dependent data.
-
Publication No.: US11501152B2
Publication Date: 2022-11-15
Application No.: US15659853
Filing Date: 2017-07-26
Applicant: Intel Corporation
Abstract: A mechanism is described for facilitating learning and application of neural network topologies in machine learning at autonomous machines. A method of embodiments, as described herein, includes monitoring and detecting structure learning of neural networks relating to machine learning operations at a computing device having a processor, and generating a recursive generative model based on one or more topologies of one or more of the neural networks. The method may further include converting the generative model into a discriminative model.
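The generative-to-discriminative conversion mentioned in this abstract can be illustrated in a much simpler setting than the patent's recursive topology model. The sketch below is a toy analogy only: it fits a two-class Gaussian naive Bayes generative model with pooled per-feature variances and converts it into the equivalent linear discriminative classifier. None of the names or modeling choices here come from the patent.

import numpy as np

def fit_generative(X: np.ndarray, y: np.ndarray):
    # Generative side: class-conditional means, pooled per-feature variance, class prior.
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    pooled = np.vstack([X[y == 0] - mu0, X[y == 1] - mu1])
    var = pooled.var(axis=0) + 1e-9
    prior1 = y.mean()
    return mu0, mu1, var, prior1

def to_discriminative(mu0, mu1, var, prior1):
    # The log-odds of this generative model is linear in x: w @ x + b.
    w = (mu1 - mu0) / var
    b = ((mu0 ** 2 - mu1 ** 2) / (2 * var)).sum() + np.log(prior1 / (1 - prior1))
    return w, b

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 3)), rng.normal(1.5, 1.0, (100, 3))])
y = np.r_[np.zeros(100), np.ones(100)]
w, b = to_discriminative(*fit_generative(X, y))
pred = (X @ w + b > 0).astype(int)  # discriminative decision rule
print("training accuracy:", (pred == y).mean())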
-
Publication No.: US11238338B2
Publication Date: 2022-02-01
Application No.: US15494887
Filing Date: 2017-04-24
Applicant: Intel Corporation
Inventors: Lev Faivishevsky, Tomer Bar-On, Yaniv Fais, Jacob Subag, Jeremie Dreyfuss, Amit Bleiweiss, Tomer Schwartz, Raanan Yonatan Yehezkel Rohekar, Michael Behar, Amitai Armon, Uzi Sarel
Abstract: In an example, an apparatus comprises a plurality of execution units comprising logic, at least partially including hardware logic, to receive a plurality of data inputs for training a neural network, wherein the data inputs comprise training data and weight inputs; represent the training data in a first form; and represent the weight inputs in a second form. Other embodiments are also disclosed and claimed.
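One common reading of "a first form for the training data and a second form for the weight inputs" is a mixed-precision scheme. The sketch below illustrates that reading with NumPy, keeping training data in float16 and quantizing weights to int8 with a per-tensor scale; these specific formats are assumptions chosen for illustration, not formats fixed by the abstract.

import numpy as np

def to_first_form(data: np.ndarray) -> np.ndarray:
    # First form: half-precision floating point for the training data.
    return data.astype(np.float16)

def to_second_form(weights: np.ndarray):
    # Second form: symmetric int8 quantization of the weights, one scale per tensor.
    scale = max(float(np.abs(weights).max()), 1e-12) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def forward(data_f16: np.ndarray, q_weights: np.ndarray, scale: float) -> np.ndarray:
    # Compute in float32, dequantizing the weights on the fly.
    return data_f16.astype(np.float32) @ (q_weights.astype(np.float32) * scale)

rng = np.random.default_rng(0)
x = to_first_form(rng.normal(size=(4, 8)))
qw, s = to_second_form(rng.normal(size=(8, 3)))
print(forward(x, qw, s).shape)  # (4, 3)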
-
Publication No.: US11698930B2
Publication Date: 2023-07-11
Application No.: US16014495
Filing Date: 2018-06-21
Applicant: INTEL CORPORATION
IPC Classes: G06F16/901, G06N3/082, G06F18/2137, G06F18/21, G06N3/045, G06N5/01, G06N7/01
CPC Classes: G06F16/9024, G06F16/9027, G06F18/2137, G06F18/2163, G06N3/045, G06N3/082, G06N5/01, G06N7/01
Abstract: Various embodiments are generally directed to techniques for determining artificial neural network topologies, such as by utilizing probabilistic graphical models, for instance. Some embodiments are particularly related to determining neural network topologies by bootstrapping a graph, such as a probabilistic graphical model, into a multi-graphical model, or graphical model tree. Various embodiments may include logic to determine a collection of sample sets from a dataset. In various such embodiments, each sample set may be drawn randomly from the dataset with replacement between drawings. In some embodiments, logic may partition a graph into multiple subgraph sets based on each of the sample sets. In several embodiments, the multiple subgraph sets may be scored, such as with Bayesian statistics, and selected amongst as part of determining a topology for a neural network.
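A rough sketch of the bootstrapping loop described here: draw sample sets from the dataset with replacement, build a candidate graph from each, score the candidates, and keep the best-scoring structure. The correlation-threshold graph builder and the BIC-style score below are illustrative stand-ins for the patent's probabilistic-graphical-model machinery, not its actual scoring method.

import numpy as np

def bootstrap_samples(X: np.ndarray, n_sets: int, rng) -> list:
    # Draw each sample set randomly from the dataset, with replacement.
    n = X.shape[0]
    return [X[rng.integers(0, n, size=n)] for _ in range(n_sets)]

def candidate_graph(X: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    # Candidate adjacency matrix: connect features whose absolute correlation is high.
    corr = np.corrcoef(X, rowvar=False)
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

def score_graph(X: np.ndarray, adj: np.ndarray) -> float:
    # BIC-style score: per-node Gaussian fit given its neighbours, minus an edge penalty.
    n, d = X.shape
    log_lik = 0.0
    for j in range(d):
        parents = np.flatnonzero(adj[j])
        if parents.size:
            A = np.column_stack([X[:, parents], np.ones(n)])
            resid = X[:, j] - A @ np.linalg.lstsq(A, X[:, j], rcond=None)[0]
        else:
            resid = X[:, j] - X[:, j].mean()
        log_lik += -0.5 * n * np.log(resid.var() + 1e-12)
    return log_lik - 0.5 * np.log(n) * adj.sum() / 2

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
graphs = [candidate_graph(S) for S in bootstrap_samples(X, n_sets=10, rng=rng)]
best = max(graphs, key=lambda g: score_graph(X, g))
print("edges in selected graph:", int(best.sum()) // 2)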
-
Publication No.: US20220076118A1
Publication Date: 2022-03-10
Application No.: US17404153
Filing Date: 2021-08-17
Applicant: Intel Corporation
Inventors: Lev Faivishevsky, Tomer Bar-On, Yaniv Fais, Jacob Subag, Jeremie Dreyfuss, Amit Bleiweiss, Tomer Schwartz, Raanan Yonatan Yehezkel Rohekar, Michael Behar, Amitai Armon, Uzi Sarel
Abstract: In an example, an apparatus comprises a plurality of execution units comprising logic, at least partially including hardware logic, to receive a plurality of data inputs for training a neural network, wherein the data inputs comprise training data and weight inputs; represent the training data in a first form; and represent the weight inputs in a second form. Other embodiments are also disclosed and claimed.
-
Publication No.: US20190042911A1
Publication Date: 2019-02-07
Application No.: US15853403
Filing Date: 2017-12-22
Applicant: Intel Corporation
IPC Classes: G06N3/04
Abstract: A recursive method and apparatus produce a deep convolutional neural network (CNN). The method iteratively processes an input directed acyclic graph (DAG) representing an initial CNN, a set of nodes, a set of exogenous nodes, and a resolution based on the CNN. An iteration for a node may include recursively performing the iteration upon each node in a descendant node set to create a descendant DAG, and upon each node in ancestor node sets to create ancestor DAGs, the ancestor node sets being the remainder of nodes in the temporary DAG after removing the nodes of the descendant node set. The descendant and ancestor DAGs are merged, and a latent layer is created that includes a latent node for each ancestor node set. Each latent node is set to be a parent of the sets of parentless nodes in the combined descendant and ancestor DAGs before returning.
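The recursion can be pictured with a small schematic: split the nodes of a DAG into a descendant set and a remaining ancestor set, recurse on each part, merge the results, and add a latent node as parent of the merged DAG's parentless nodes. The sketch below is a heavily simplified illustration; in particular, the split rule (childless nodes form the descendant set) and the use of a single ancestor set are assumptions, not the patent's criteria.

from itertools import count

_latent_ids = count()

def build(dag: dict) -> dict:
    """dag maps each node to the set of its parent nodes."""
    if len(dag) <= 1:
        return dict(dag)

    # Illustrative split: childless nodes form the descendant set,
    # everything else is treated as a single ancestor set.
    has_children = {p for parents in dag.values() for p in parents}
    descendants = {n for n in dag if n not in has_children}
    ancestors = set(dag) - descendants
    if not descendants or not ancestors:
        return dict(dag)

    # Recurse on each part, keeping only parent edges inside the part.
    desc_dag = build({n: dag[n] & descendants for n in descendants})
    anc_dag = build({n: dag[n] & ancestors for n in ancestors})

    # Merge the results and add a latent node as parent of every parentless node.
    merged = {**desc_dag, **anc_dag}
    latent = f"latent_{next(_latent_ids)}"
    for parents in merged.values():
        if not parents:
            parents.add(latent)
    merged[latent] = set()
    return merged

example = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
print(build(example))

Running the example produces a layered structure in which each recursion level contributes one latent node, which is the general shape the abstract describes.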
-
Publication No.: US20230394305A1
Publication Date: 2023-12-07
Application No.: US18325744
Filing Date: 2023-05-30
Applicant: Intel Corporation
Inventors: Lev Faivishevsky, Tomer Bar-On, Yaniv Fais, Jacob Subag, Jeremie Dreyfuss, Amit Bleiweiss, Tomer Schwartz, Raanan Yonatan Yehezkel Rohekar, Michael Behar, Amitai Armon, Uzi Sarel
Abstract: In an example, an apparatus comprises a plurality of execution units comprising logic, at least partially including hardware logic, to receive a plurality of data inputs for training a neural network, wherein the data inputs comprise training data and weight inputs; represent the training data in a first form; and represent the weight inputs in a second form. Other embodiments are also disclosed and claimed.
-
Publication No.: US11010658B2
Publication Date: 2021-05-18
Application No.: US15853403
Filing Date: 2017-12-22
Applicant: Intel Corporation
Abstract: A recursive method and apparatus produce a deep convolutional neural network (CNN). The method iteratively processes an input directed acyclic graph (DAG) representing an initial CNN, a set of nodes, a set of exogenous nodes, and a resolution based on the CNN. An iteration for a node may include recursively performing the iteration upon each node in a descendant node set to create a descendant DAG, and upon each node in ancestor node sets to create ancestor DAGs, the ancestor node sets being the remainder of nodes in the temporary DAG after removing the nodes of the descendant node set. The descendant and ancestor DAGs are merged, and a latent layer is created that includes a latent node for each ancestor node set. Each latent node is set to be a parent of the sets of parentless nodes in the combined descendant and ancestor DAGs before returning.
-
Publication No.: US11704564B2
Publication Date: 2023-07-18
Application No.: US17404153
Filing Date: 2021-08-17
Applicant: Intel Corporation
Inventors: Lev Faivishevsky, Tomer Bar-On, Yaniv Fais, Jacob Subag, Jeremie Dreyfuss, Amit Bleiweiss, Tomer Schwartz, Raanan Yonatan Yehezkel Rohekar, Michael Behar, Amitai Armon, Uzi Sarel
Abstract: In an example, an apparatus comprises a plurality of execution units comprising logic, at least partially including hardware logic, to receive a plurality of data inputs for training a neural network, wherein the data inputs comprise training data and weight inputs; represent the training data in a first form; and represent the weight inputs in a second form. Other embodiments are also disclosed and claimed.