1.
Publication No.: US11263512B2
Publication Date: 2022-03-01
Application No.: US15943830
Filing Date: 2018-04-03
Applicant: Hailo Technologies Ltd.
Inventor: Avi Baum, Or Danon, Hadar Zeitlin, Daniel Ciubotariu, Rami Feig
IPC: G06N3/04, G06F12/02, G06N3/063, G06F30/27, G06F7/501, G06F7/523, G06F9/50, G06F17/10, G06F5/01, G06N3/08, G06F13/16, G06F9/30, G06K9/46, G06K9/62, G06N3/02, G06F12/06, G06N20/00, G06F30/30
Abstract: A novel and useful neural network (NN) processing core adapted to implement artificial neural networks (ANNs) and incorporating strictly separate control and data planes. The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. This homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible; additional features and capabilities are aggregated at higher levels in the hierarchy. On-chip memory provides storage for the content inherently required for basic operation at a particular hierarchy level and is coupled with the computational resources in an optimal ratio. Lean control provides just enough signaling to manage only the operations required at a particular hierarchical level. Dynamic resource assignment agility is provided and can be adjusted as required depending on resource availability and the capacity of the device.
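The abstract's hierarchical organization — bare compute units at the leaves, with management and aggregation added only at higher levels — can be illustrated with a minimal sketch. All class and method names here are ours for illustration, not taken from the patent:

```python
# Hypothetical sketch of a hierarchical NN processor model: self-contained
# compute units aggregated into a cluster, with control kept at the
# cluster level rather than duplicated in every unit.

class ComputeUnit:
    """Leaf-level unit: a bare multiply-accumulate with no extra features."""
    def __init__(self):
        self.acc = 0.0

    def mac(self, weight, activation):
        self.acc += weight * activation
        return self.acc

class Cluster:
    """Higher hierarchy level: aggregates units and adds management."""
    def __init__(self, n_units):
        self.units = [ComputeUnit() for _ in range(n_units)]

    def dot(self, weights, activations):
        # Lean control: each unit receives only the signal it needs
        # (its own weight/activation pair); aggregation happens here.
        return sum(u.mac(w, a)
                   for u, w, a in zip(self.units, weights, activations))

cluster = Cluster(n_units=3)
result = cluster.dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
print(result)  # 1*4 + 2*5 + 3*6 = 32.0
```

Because every unit is identical, adding capacity means instantiating more of the same unit rather than designing a new one, which is the management simplification the abstract describes.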
2.
Publication No.: US20180285719A1
Publication Date: 2018-10-04
Application No.: US15943830
Filing Date: 2018-04-03
Applicant: Hailo Technologies Ltd.
Inventor: Avi Baum, Or Danon, Hadar Zeitlin, Daniel Ciubotariu, Rami Feig
Abstract: A novel and useful neural network (NN) processing core adapted to implement artificial neural networks (ANNs) and incorporating strictly separate control and data planes. The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. This homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible; additional features and capabilities are aggregated at higher levels in the hierarchy. On-chip memory provides storage for the content inherently required for basic operation at a particular hierarchy level and is coupled with the computational resources in an optimal ratio. Lean control provides just enough signaling to manage only the operations required at a particular hierarchical level. Dynamic resource assignment agility is provided and can be adjusted as required depending on resource availability and the capacity of the device.
3.
Publication No.: US11354563B2
Publication Date: 2022-06-07
Application No.: US15943845
Filing Date: 2018-04-03
Applicant: Hailo Technologies Ltd.
Inventor: Avi Baum, Or Danon, Hadar Zeitlin, Daniel Ciubotariu, Rami Feig
IPC: G06F9/50, G06N3/04, G06F12/02, G06N3/063, G06F12/06, G06N20/00, G06F30/30, G06F30/27, G06V10/40, G06F7/501, G06F7/523, G06F17/10, G06F5/01, G06N3/08, G06F13/16, G06F9/30, G06K9/62, G06N3/02
Abstract: A novel and useful neural network (NN) processing core adapted to implement artificial neural networks (ANNs) and incorporating configurable and programmable sliding-window-based memory access. The memory mapping and allocation scheme trades off random and full access in favor of high parallelism and static mapping to a subset of the overall address space. The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. This homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible; additional features and capabilities are aggregated at higher levels in the hierarchy. On-chip memory provides storage for the content inherently required for basic operation at a particular hierarchy level and is coupled with the computational resources in an optimal ratio. Lean control provides just enough signaling to manage only the operations required at a particular hierarchical level. Dynamic resource assignment agility is provided and can be adjusted as required depending on resource availability and the capacity of the device.
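The sliding-window trade-off described in this abstract — giving up random access to the whole address space in exchange for a statically mapped window that slides forward — can be sketched as follows. This is an illustrative model under our own assumptions, not the patented circuit:

```python
# Illustrative sketch (not the patented design): consumers see only a
# fixed-size window into a flat memory. Accesses outside the window are
# rejected; advancing the window replaces random access with a static,
# predictable mapping that is friendly to parallel hardware.

class SlidingWindowMemory:
    def __init__(self, size, window):
        self.mem = [0] * size
        self.window = window   # window length in words
        self.base = 0          # current window base address

    def _check(self, offset):
        if not 0 <= offset < self.window:
            raise IndexError("access outside the mapped window")

    def write(self, offset, value):
        self._check(offset)
        self.mem[self.base + offset] = value

    def read(self, offset):
        self._check(offset)
        return self.mem[self.base + offset]

    def slide(self, step):
        # Advance the window; old addresses become inaccessible.
        self.base = min(self.base + step, len(self.mem) - self.window)

m = SlidingWindowMemory(size=16, window=4)
m.write(0, 7)    # lands at absolute address 0
m.slide(4)
m.write(0, 9)    # same window offset, now absolute address 4
```

The point of the scheme is that each consumer's address arithmetic stays tiny (an offset within a small window), while the mapping of windows onto the full address space is decided statically.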
4.
Publication No.: US20180285718A1
Publication Date: 2018-10-04
Application No.: US15943800
Filing Date: 2018-04-03
Applicant: Hailo Technologies Ltd.
Inventor: Avi Baum, Or Danon, Hadar Zeitlin, Daniel Ciubotariu, Rami Feig
Abstract: A novel and useful neural network (NN) processing core adapted to implement artificial neural networks (ANNs). The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. This homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible; additional features and capabilities are aggregated at higher levels in the hierarchy. On-chip memory provides storage for the content inherently required for basic operation at a particular hierarchy level and is coupled with the computational resources in an optimal ratio. Lean control provides just enough signaling to manage only the operations required at a particular hierarchical level. Dynamic resource assignment agility is provided and can be adjusted as required depending on resource availability and the capacity of the device.
5.
Publication No.: US11216717B2
Publication Date: 2022-01-04
Application No.: US15943800
Filing Date: 2018-04-03
Applicant: Hailo Technologies Ltd.
Inventor: Avi Baum, Or Danon, Hadar Zeitlin, Daniel Ciubotariu, Rami Feig
IPC: G06F12/02, G06F30/27, G06F30/30, G06F7/501, G06F7/523, G06F9/50, G06F12/06, G06F17/10, G06F5/01, G06F13/16, G06F9/30, G06N20/00, G06N3/04, G06N3/08, G06N3/02, G06N3/063, G05B13/02, G06K9/46, G06K9/62
Abstract: A novel and useful neural network (NN) processing core adapted to implement artificial neural networks (ANNs). The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. This homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible; additional features and capabilities are aggregated at higher levels in the hierarchy. On-chip memory provides storage for the content inherently required for basic operation at a particular hierarchy level and is coupled with the computational resources in an optimal ratio. Lean control provides just enough signaling to manage only the operations required at a particular hierarchical level. Dynamic resource assignment agility is provided and can be adjusted as required depending on resource availability and the capacity of the device.
6.
Publication No.: US20180285725A1
Publication Date: 2018-10-04
Application No.: US15943845
Filing Date: 2018-04-03
Applicant: Hailo Technologies Ltd.
Inventor: Avi Baum, Or Danon, Hadar Zeitlin, Daniel Ciubotariu, Rami Feig
Abstract: A novel and useful neural network (NN) processing core adapted to implement artificial neural networks (ANNs) and incorporating configurable and programmable sliding-window-based memory access. The memory mapping and allocation scheme trades off random and full access in favor of high parallelism and static mapping to a subset of the overall address space. The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. This homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible; additional features and capabilities are aggregated at higher levels in the hierarchy. On-chip memory provides storage for the content inherently required for basic operation at a particular hierarchy level and is coupled with the computational resources in an optimal ratio. Lean control provides just enough signaling to manage only the operations required at a particular hierarchical level. Dynamic resource assignment agility is provided and can be adjusted as required depending on resource availability and the capacity of the device.
7.
Publication No.: US11514291B2
Publication Date: 2022-11-29
Application No.: US15943976
Filing Date: 2018-04-03
Applicant: Hailo Technologies Ltd.
Inventor: Avi Baum, Or Danon, Hadar Zeitlin, Daniel Ciubotariu, Rami Feig
IPC: G06N3/063, G06N3/04, G06F12/02, G06F12/06, G06N20/00, G06F30/30, G06F30/27, G06V10/40, G06F7/501, G06F7/523, G06F9/50, G06F17/10, G06F5/01, G06N3/08, G06F13/16, G06F9/30, G06K9/62, G06N3/02
Abstract: A novel and useful neural network (NN) processing core adapted to implement artificial neural networks (ANNs) and incorporating processing circuits having compute and local memory elements. The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. This homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible; additional features and capabilities are aggregated at higher levels in the hierarchy. On-chip memory provides storage for the content inherently required for basic operation at a particular hierarchy level and is coupled with the computational resources in an optimal ratio. Lean control provides just enough signaling to manage only the operations required at a particular hierarchical level. Dynamic resource assignment agility is provided and can be adjusted as required depending on resource availability and the capacity of the device.
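The pairing of compute with local memory that this abstract describes — weights held next to the arithmetic that uses them, rather than fetched from a distant shared store — can be sketched minimally. The class and its fields are our own illustrative names:

```python
# Hypothetical sketch: a processing circuit that couples a compute element
# with its own local memory, so the weights it needs for basic operation
# are resident rather than fetched per-operation from shared memory.

class ProcessingCircuit:
    def __init__(self, weights):
        # Local memory: holds exactly the content this circuit needs,
        # mirroring the "optimal ratio" coupling of storage to compute.
        self.local_mem = list(weights)

    def forward(self, activations):
        # Compute element: a dot product against locally stored weights.
        return sum(w * a for w, a in zip(self.local_mem, activations))

pc = ProcessingCircuit([0.5, -1.0, 2.0])
y = pc.forward([2.0, 3.0, 1.0])  # 1.0 - 3.0 + 2.0 = 0.0
```

The design choice being illustrated is locality: because each circuit's memory is sized to its own workload, no bandwidth is spent moving weights around during steady-state operation.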
8.
Publication No.: US20180285726A1
Publication Date: 2018-10-04
Application No.: US15943872
Filing Date: 2018-04-03
Applicant: Hailo Technologies Ltd.
Inventor: Avi Baum, Or Danon, Hadar Zeitlin, Daniel Ciubotariu, Rami Feig
Abstract: A novel and useful neural network (NN) processing core incorporating inter-device connectivity and adapted to implement artificial neural networks (ANNs). A chip-to-chip interface spreads a given ANN model across multiple devices in a seamless manner. The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. This homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible; additional features and capabilities are aggregated at higher levels in the hierarchy. On-chip memory provides storage for the content inherently required for basic operation at a particular hierarchy level and is coupled with the computational resources in an optimal ratio. Lean control provides just enough signaling to manage only the operations required at a particular hierarchical level. Dynamic resource assignment agility is provided and can be adjusted as required depending on resource availability and the capacity of the device.
9.
Publication No.: US20180285678A1
Publication Date: 2018-10-04
Application No.: US15669933
Filing Date: 2017-08-06
Applicant: Hailo Technologies Ltd.
Inventor: Avi Baum, Or Danon, Mark Grobman, Hadar Zeitlin
CPC classification number: G06F12/0207, G06F5/01, G06F7/501, G06F7/523, G06F9/30054, G06F9/5016, G06F9/5027, G06F12/0646, G06F12/0692, G06F13/1663, G06F17/10, G06K9/46, G06K9/62, G06N3/02, G06N3/04, G06N3/0454, G06N3/063, G06N3/08, G06N3/082, G06N3/084, Y02D10/14
Abstract: A novel and useful artificial neural network that incorporates emphasis and focus techniques to extract more information from one or more portions of an input image than from the rest of the image. The ANN recognizes that valuable information in an input image is typically not distributed throughout the image but rather is concentrated in one or more regions. Rather than implementing CNN layers sequentially (i.e., row by row) on the input domain of each layer, the present invention leverages the fact that valuable information is focused in one or more regions of the image, to which it is desirable to apply more attention and more elaborate evaluation. Precision dilution can be applied to those portions of the input image that are not the center of focus and emphasis. A spatially aware function determines the location(s) of the areas of focus and is applied to the first convolutional layer. Dilution of precision is performed before and/or after the first convolutional layer, thereby significantly reducing computation and power requirements.
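The precision-dilution idea in this abstract — full precision inside the areas of focus, coarse quantization elsewhere — can be sketched as a pre-processing step on an image. The function names, the rectangular focus region, and the level counts are our illustrative assumptions; the patent's spatially aware function is more general:

```python
# Illustrative sketch (not the patented method): keep fine quantization
# inside a rectangular focus region and dilute precision elsewhere by
# coarse quantization, before the image reaches the first conv layer.

def quantize(value, levels):
    """Round a value in [0, 1] to one of `levels` evenly spaced levels."""
    step = 1.0 / (levels - 1)
    return round(value / step) * step

def dilute_outside_focus(image, focus, fine_levels=256, coarse_levels=4):
    """image: 2D list of floats in [0, 1]; focus: (r0, r1, c0, c1)."""
    r0, r1, c0, c1 = focus
    out = []
    for r, row in enumerate(image):
        new_row = []
        for c, px in enumerate(row):
            in_focus = r0 <= r < r1 and c0 <= c < c1
            # Fine precision where attention is applied, coarse elsewhere.
            levels = fine_levels if in_focus else coarse_levels
            new_row.append(quantize(px, levels))
        out.append(new_row)
    return out

img = [[0.1, 0.5],
       [0.9, 0.3]]
out = dilute_outside_focus(img, focus=(0, 1, 0, 1))  # focus on pixel (0, 0)
```

Outside the focus region, each pixel collapses to one of four levels, so downstream arithmetic on those regions can run at reduced bit width, which is where the computation and power savings come from.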
10.
Publication No.: US11675693B2
Publication Date: 2023-06-13
Application No.: US15943872
Filing Date: 2018-04-03
Applicant: Hailo Technologies Ltd.
Inventor: Avi Baum, Or Danon, Hadar Zeitlin, Daniel Ciubotariu, Rami Feig
IPC: G06F12/02, G06N3/063, G06F12/06, G06N20/00, G06F30/30, G06F30/27, G06F18/00, G06N3/045, G06F7/501, G06F7/523, G06F9/50, G06F17/10, G06F5/01, G06N3/08, G06F13/16, G06N3/04, G06F9/30, G06N3/084, G06N3/02, G06N3/082
CPC classification number: G06F12/0207, G06F5/01, G06F7/501, G06F7/523, G06F9/30054, G06F9/5016, G06F9/5027, G06F12/02, G06F12/0646, G06F12/0692, G06F13/1663, G06F17/10, G06F18/00, G06F30/27, G06F30/30, G06N3/02, G06N3/04, G06N3/045, G06N3/063, G06N3/08, G06N3/084, G06N20/00, G06N3/082, Y02D10/00
Abstract: A novel and useful neural network (NN) processing core incorporating inter-device connectivity and adapted to implement artificial neural networks (ANNs). A chip-to-chip interface spreads a given ANN model across multiple devices in a seamless manner. The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. This homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible; additional features and capabilities are aggregated at higher levels in the hierarchy. On-chip memory provides storage for the content inherently required for basic operation at a particular hierarchy level and is coupled with the computational resources in an optimal ratio. Lean control provides just enough signaling to manage only the operations required at a particular hierarchical level. Dynamic resource assignment agility is provided and can be adjusted as required depending on resource availability and the capacity of the device.
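The chip-to-chip spreading described in this abstract — one ANN model partitioned at layer boundaries across several devices, with each device's output feeding the next — can be sketched as follows. The partitioning scheme and all names are our illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch: a layer-wise split of an ANN across devices, with
# the chip-to-chip interface modeled as handing activations from one
# device's last layer to the next device's first layer.

def matvec(matrix, vec):
    """Apply one fully connected layer (weight matrix) to an activation vector."""
    return [sum(w * x for w, x in zip(row, vec)) for row in matrix]

class Device:
    """One chip: executes only its assigned slice of the layer pipeline."""
    def __init__(self, layers):
        self.layers = layers  # list of weight matrices

    def run(self, activations):
        for layer in self.layers:
            activations = matvec(layer, activations)
        return activations

# A 3-layer model split across two devices at a layer boundary.
model = [
    [[1, 0], [0, 1]],   # layer 1: identity
    [[2, 0], [0, 2]],   # layer 2: scale by 2
    [[1, 1]],           # layer 3: sum the two activations
]
devices = [Device(model[:2]), Device(model[2:])]

x = [3, 4]
for dev in devices:     # chip-to-chip link: output of one feeds the next
    x = dev.run(x)
print(x)  # [3, 4] -> [3, 4] -> [6, 8] -> [14]
```

Because the hand-off carries only activations, not weights, each device keeps its slice of the model resident, which is what lets a model larger than one chip run "seamlessly" across several.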