-
Publication Number: US11698930B2
Publication Date: 2023-07-11
Application Number: US16014495
Application Date: 2018-06-21
Applicant: Intel Corporation
Inventor: Yaniv Gurwicz, Raanan Yonatan Yehezkel Rohekar, Shami Nisimov, Guy Koren, Gal Novik
IPC: G06F16/901, G06N3/082, G06F18/2137, G06F18/21, G06N3/045, G06N5/01, G06N7/01
CPC classification number: G06F16/9024, G06F16/9027, G06F18/2137, G06F18/2163, G06N3/045, G06N3/082, G06N5/01, G06N7/01
Abstract: Various embodiments are generally directed to techniques for determining artificial neural network topologies, such as by utilizing probabilistic graphical models. Some embodiments are particularly related to determining neural network topologies by bootstrapping a graph, such as a probabilistic graphical model, into a multi-graphical model, or graphical model tree. Various embodiments may include logic to determine a collection of sample sets from a dataset. In various such embodiments, each sample set may be drawn randomly from the dataset with replacement between drawings. In some embodiments, logic may partition a graph into multiple subgraph sets based on each of the sample sets. In several embodiments, the multiple subgraph sets may be scored, such as with Bayesian statistics, and selected amongst as part of determining a topology for a neural network.
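A minimal sketch of the bootstrap-and-score loop the abstract outlines, in Python/NumPy; the BIC stand-in for the Bayesian score and the column-subset stand-in for subgraph sets are illustrative assumptions, not the claimed method:

```python
import numpy as np

def bootstrap_sample_sets(dataset, num_sets, seed=None):
    """Draw sample sets from the dataset, each sampled uniformly at
    random with replacement between drawings."""
    rng = np.random.default_rng(seed)
    n = dataset.shape[0]
    return [dataset[rng.integers(0, n, size=n)] for _ in range(num_sets)]

def bic_score(samples, num_params):
    """BIC stand-in for the Bayesian score: Gaussian log-likelihood
    penalized by model size (the patent does not fix a scoring rule here)."""
    n = samples.shape[0]
    var = samples.var(axis=0) + 1e-9
    loglik = -0.5 * n * np.sum(np.log(2.0 * np.pi * var) + 1.0)
    return loglik - 0.5 * num_params * np.log(n)

# Candidate "subgraph sets" modeled as column subsets (a simplification);
# the candidate with the best mean score across bootstraps is selected.
data = np.random.default_rng(0).normal(size=(200, 4))
candidates = [[0, 1], [2, 3], [0, 2, 3]]
best = max(candidates, key=lambda cols: np.mean(
    [bic_score(s[:, cols], len(cols))
     for s in bootstrap_sample_sets(data, 10, seed=1)]))
print("selected subgraph columns:", best)
```

Sampling with replacement keeps each sample set the same size as the dataset while perturbing which rows dominate, which is what makes the per-sample scores vary across the bootstraps.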
-
Publication Number: US20220027704A1
Publication Date: 2022-01-27
Application Number: US17366919
Application Date: 2021-07-02
Applicant: Intel Corporation
Inventor: Yonatan Glesner, Gal Novik, Dmitri Vainbrand, Gal Leibovich
Abstract: Methods and apparatus relating to techniques for incremental network quantization. In an example, an apparatus comprises logic, at least partially comprising hardware logic to determine a plurality of weights for a layer of a convolutional neural network (CNN) comprising a plurality of kernels; organize the plurality of weights into a plurality of clusters for the plurality of kernels; and apply a K-means compression algorithm to each of the plurality of clusters. Other embodiments are also disclosed and claimed.
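A sketch of the K-means weight-sharing step the abstract names, in Python/NumPy; per-output-channel clustering and the 16-entry codebook are illustrative choices, not the claimed organization:

```python
import numpy as np

def kmeans_quantize(weights, k, iters=25):
    """Plain Lloyd's k-means on a kernel's weights: returns a k-entry
    codebook and per-weight indices into it."""
    flat = weights.ravel()
    centroids = np.linspace(flat.min(), flat.max(), k)  # spread over the range
    assign = np.zeros(flat.size, dtype=int)
    for _ in range(iters):
        assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        for j in range(k):
            members = flat[assign == j]
            if members.size:
                centroids[j] = members.mean()
    return centroids, assign.reshape(weights.shape)

# Hypothetical conv layer shaped (out_channels, in_channels, kH, kW);
# each output-channel kernel is treated as one cluster of weights.
layer = np.random.default_rng(0).normal(size=(8, 4, 3, 3))
compressed = [kmeans_quantize(layer[c], k=16) for c in range(layer.shape[0])]
codebook, idx = compressed[0]
print("codebook size:", codebook.size, "indices fit in 4 bits:", idx.max() < 16)
```

With 16 shared values per cluster, each stored weight shrinks to a 4-bit index plus its cluster's small codebook, which is where the compression comes from.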
-
Publication Number: US20190042911A1
Publication Date: 2019-02-07
Application Number: US15853403
Application Date: 2017-12-22
Applicant: Intel Corporation
Inventor: Guy Koren, Raanan Yonatan Yehezkel Rohekar, Shami Nisimov, Gal Novik
IPC: G06N3/04
Abstract: A recursive method and apparatus produce a deep convolutional neural network (CNN). The method iteratively processes an input directed acyclic graph (DAG) representing an initial CNN, a set of nodes, a set of exogenous nodes, and a resolution based on the CNN. An iteration for a node may include recursively performing the iteration upon each node in a descendant node set to create a descendant DAG, and upon each node in ancestor node sets to create ancestor DAGs, the ancestor node sets being the remainder of nodes in a temporary working DAG after removing the nodes of the descendant node set. The descendant and ancestor DAGs are merged, and a latent layer is created that includes a latent node for each ancestor node set. Each latent node is set to be a parent of the sets of parentless nodes in the combined descendant and ancestor DAGs before returning.
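A Python skeleton of the recursion roughly as the abstract narrates it; the halving split, the parent-map DAG, and the latent-node naming are placeholders, since the abstract does not specify how the descendant set is chosen:

```python
from dataclasses import dataclass, field
from itertools import count

_latent_id = count()  # unique names for latent nodes across recursive calls

@dataclass
class DAG:
    """Minimal DAG as a parent map: node -> set of parent nodes."""
    parents: dict = field(default_factory=dict)

    def parentless(self):
        return {n for n, ps in self.parents.items() if not ps}

def build_topology(nodes, depth):
    """Recurse on a descendant set and on the ancestor sets left after
    removing it, merge the resulting DAGs, then add one latent node per
    ancestor set as a parent of the merged graph's parentless nodes."""
    if depth == 0 or len(nodes) <= 1:
        return DAG({n: set() for n in nodes})
    ordered = sorted(nodes)
    descendants = set(ordered[: len(ordered) // 2])     # placeholder split
    ancestor_sets = [set(ordered[len(ordered) // 2:])]  # the remainder
    merged = build_topology(descendants, depth - 1)
    for aset in ancestor_sets:
        merged.parents.update(build_topology(aset, depth - 1).parents)
    orphans = merged.parentless()
    for _ in ancestor_sets:
        latent = f"latent_{next(_latent_id)}"
        merged.parents[latent] = set()
        for n in orphans:
            merged.parents[n].add(latent)
    return merged

print(build_topology({"x1", "x2", "x3", "x4"}, depth=2).parents)
```

Each level of the recursion contributes one latent layer, so the depth of the resulting latent hierarchy tracks the recursion depth.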
-
Publication Number: US20230117143A1
Publication Date: 2023-04-20
Application Number: US18053538
Application Date: 2022-11-08
Applicant: Intel Corporation
Inventor: Raanan Yonatan Yehezkel Rohekar, Guy Koren, Shami Nisimov, Gal Novik
Abstract: A mechanism is described for facilitating learning and application of neural network topologies in machine learning at autonomous machines. A method of embodiments, as described herein, includes monitoring and detecting structure learning of neural networks relating to machine learning operations at a computing device having a processor, and generating a recursive generative model based on one or more topologies of one or more of the neural networks. The method may further include converting the generative model into a discriminative model.
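The abstract leaves the generative-to-discriminative conversion unspecified; one natural reading is reversing the edges of the learned generative graph, though that is an assumption here rather than the claimed mechanism. A toy Python sketch under that assumption:

```python
# Generative topology: latent variables H* explain observed variables X*.
generative = [("H1", "X1"), ("H1", "X2"), ("H2", "X2"), ("H2", "X3")]

# Discriminative counterpart: the same skeleton with every edge reversed,
# so computation flows from the observations toward the latents, which
# then serve as hidden layers of the resulting neural network.
discriminative = [(dst, src) for (src, dst) in generative]
print(discriminative)
```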
-
Publication Number: US20210350585A1
Publication Date: 2021-11-11
Application Number: US17344639
Application Date: 2021-06-10
Applicant: Intel Corporation
Inventor: Tomer Bar-On, Jacob Subag, Yaniv Fais, Jeremie Dreyfuss, Gal Novik, Gal Leibovich, Tomer Schwartz, Ehud Cohen, Lev Faivishevsky, Uzi Sarel, Amitai Armon, Yahav Shadmiy
IPC: G06T9/00, H04N19/42, G06N3/04, H04N19/436, G06N3/08
Abstract: In an example, an apparatus comprises logic, at least partially including hardware logic, to implement a lossy compression algorithm which utilizes a data transform and quantization process to compress data in a convolutional neural network (CNN) layer. Other embodiments are also disclosed and claimed.
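A sketch of transform-plus-quantization lossy coding on a tile of CNN data, in Python/NumPy; the orthonormal FFT stands in for whichever transform the hardware logic implements, and the quantization step size is arbitrary:

```python
import numpy as np

def lossy_compress(tile, step):
    """Transform the tile, then uniformly quantize the coefficients;
    rounding is the lossy step and zeroes out most small coefficients."""
    coeffs = np.fft.rfft2(tile, norm="ortho")  # stand-in transform
    return np.round(coeffs / step)

def lossy_decompress(q, step, shape):
    """Rescale the quantized coefficients and invert the transform."""
    return np.fft.irfft2(q * step, s=shape, norm="ortho")

tile = np.random.default_rng(0).normal(size=(8, 8)).astype(np.float32)
q = lossy_compress(tile, step=0.5)
approx = lossy_decompress(q, step=0.5, shape=tile.shape)
print("max abs error:", float(np.abs(tile - approx).max()))
```

The quantized coefficient array is sparse and low-entropy, so a cheap entropy coder downstream can shrink it far below the raw activation size, at the cost of the bounded reconstruction error printed above.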
-
Publication Number: US11037330B2
Publication Date: 2021-06-15
Application Number: US15482725
Application Date: 2017-04-08
Applicant: Intel Corporation
Inventor: Tomer Bar-On, Jacob Subag, Yaniv Fais, Jeremie Dreyfuss, Gal Novik, Gal Leibovich, Tomer Schwartz, Ehud Cohen, Lev Faivishevsky, Uzi Sarel, Amitai Armon, Yahav Shadmiy
IPC: G06T9/00, H04N19/42, G06N3/04, H04N19/436, G06N3/08
Abstract: In an example, an apparatus comprises logic, at least partially including hardware logic, to implement a lossy compression algorithm which utilizes a data transform and quantization process to compress data in a convolutional neural network (CNN) layer. Other embodiments are also disclosed and claimed.
-
Publication Number: US11010658B2
Publication Date: 2021-05-18
Application Number: US15853403
Application Date: 2017-12-22
Applicant: Intel Corporation
Inventor: Guy Koren, Raanan Yonatan Yehezkel Rohekar, Shami Nisimov, Gal Novik
Abstract: A recursive method and apparatus produce a deep convolutional neural network (CNN). The method iteratively processes an input directed acyclic graph (DAG) representing an initial CNN, a set of nodes, a set of exogenous nodes, and a resolution based on the CNN. An iteration for a node may include recursively performing the iteration upon each node in a descendant node set to create a descendant DAG, and upon each node in ancestor node sets to create ancestor DAGs, the ancestor node sets being the remainder of nodes in a temporary working DAG after removing the nodes of the descendant node set. The descendant and ancestor DAGs are merged, and a latent layer is created that includes a latent node for each ancestor node set. Each latent node is set to be a parent of the sets of parentless nodes in the combined descendant and ancestor DAGs before returning.