-
Publication No.: US20180096242A1
Publication Date: 2018-04-05
Application No.: US15282705
Filing Date: 2016-09-30
Inventor: Dharmendra Modha
Abstract: A scalable stream synaptic supercomputer for extreme throughput neural networks is provided. The firing state of a plurality of neurons of a first neurosynaptic core is determined substantially in parallel. The firing state of the plurality of neurons is delivered to at least one additional neurosynaptic core substantially in parallel.
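The abstract describes two parallel steps: evaluating all neurons of one core at once, and delivering the resulting firing vector to another core at once. The following Python sketch illustrates that flow in toy form; the class and method names (NeurosynapticCore, tick, deliver), the binary crossbar, and the threshold-and-reset dynamics are assumptions made for illustration, not details taken from the patent.

```python
# Toy sketch of parallel firing-state determination and delivery between cores.
# All names and dynamics here are hypothetical, not the patent's implementation.
import numpy as np

class NeurosynapticCore:
    def __init__(self, num_axons, num_neurons, seed=0):
        rng = np.random.default_rng(seed)
        # Binary synaptic crossbar: rows are axons, columns are neurons.
        self.weights = rng.integers(0, 2, size=(num_axons, num_neurons)).astype(float)
        self.potentials = np.zeros(num_neurons)
        self.threshold = 2.0
        self.input_spikes = np.zeros(num_axons)

    def deliver(self, spikes):
        # Receive a whole vector of firing states from another core at once.
        self.input_spikes += spikes

    def tick(self):
        # Integrate all incoming spikes and evaluate every neuron in one
        # vectorized step -- the "determined substantially in parallel" part.
        self.potentials += self.input_spikes @ self.weights
        fired = self.potentials >= self.threshold
        self.potentials[fired] = 0.0      # reset neurons that fired
        self.input_spikes[:] = 0.0
        return fired.astype(float)

# Two hypothetical cores in a feed-forward chain.
core_a = NeurosynapticCore(num_axons=4, num_neurons=4, seed=1)
core_b = NeurosynapticCore(num_axons=4, num_neurons=4, seed=2)

core_a.deliver(np.ones(4))     # external stimulus on core A's axons
firing_a = core_a.tick()       # firing states of core A, computed in parallel
core_b.deliver(firing_a)       # delivered to core B as one vector
firing_b = core_b.tick()
print("core A fired:", firing_a)
print("core B fired:", firing_b)
```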
-
Publication No.: US10338629B2
Publication Date: 2019-07-02
Application No.: US15273036
Filing Date: 2016-09-22
Inventors: Arnon Amir, Pallab Datta, Dharmendra Modha
Abstract: Reduction in the number of neurons and axons in a neurosynaptic network while maintaining its functionality is provided. A neural network description describing a neural network is read. One or more functional units of the neural network are identified. The one or more functional units are optimized. An optimized neural network description is written based on the optimized functional units.
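The flow in this abstract is read description, identify functional units, optimize them, write the optimized description. A minimal Python sketch of that pipeline follows under assumed conventions: a JSON description with a functional_units list, and a toy optimization that drops axons whose synaptic row is all zero. None of these specifics come from the patent.

```python
# Sketch of a read -> identify -> optimize -> write reduction pipeline.
# The JSON schema and the particular optimization are illustrative assumptions.
import json

def read_network(path):
    """Read a neural network description (assumed JSON schema)."""
    with open(path) as f:
        return json.load(f)

def optimize_functional_unit(unit):
    """Toy optimization: drop axons whose synaptic row is entirely zero."""
    weights = unit["weights"]                    # rows = axons, cols = neurons
    kept_rows = [row for row in weights if any(row)]
    removed = len(weights) - len(kept_rows)
    return {"name": unit["name"], "weights": kept_rows}, removed

def write_network(network, path):
    """Write the optimized description back out."""
    with open(path, "w") as f:
        json.dump(network, f, indent=2)

def reduce_network(in_path, out_path):
    """Read the description, optimize each functional unit, write the result."""
    network = read_network(in_path)
    optimized, total_removed = [], 0
    for unit in network["functional_units"]:
        new_unit, removed = optimize_functional_unit(unit)
        optimized.append(new_unit)
        total_removed += removed
    write_network({"functional_units": optimized}, out_path)
    return total_removed
```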
-
Publication No.: US20180082182A1
Publication Date: 2018-03-22
Application No.: US15273036
Filing Date: 2016-09-22
Inventors: Arnon Amir, Pallab Datta, Dharmendra Modha
Abstract: Reduction in the number of neurons and axons in a neurosynaptic network while maintaining its functionality is provided. A neural network description describing a neural network is read. One or more functional units of the neural network are identified. The one or more functional units are optimized. An optimized neural network description is written based on the optimized functional units.
-
Publication No.: US11270193B2
Publication Date: 2022-03-08
Application No.: US15282705
Filing Date: 2016-09-30
Inventor: Dharmendra Modha
Abstract: A scalable stream synaptic supercomputer for extreme throughput neural networks is provided. The firing state of a plurality of neurons of a first neurosynaptic core is determined substantially in parallel. The firing state of the plurality of neurons is delivered to at least one additional neurosynaptic core substantially in parallel.
-
Publication No.: US20180107918A1
Publication Date: 2018-04-19
Application No.: US15294303
Filing Date: 2016-10-14
Inventors: Arnon Amir, Pallab Datta, Nimrod Megiddo, Dharmendra Modha
Abstract: Core utilization optimization by dividing computational blocks across neurosynaptic cores is provided. In some embodiments, a neural network description describing a neural network is read. The neural network comprises a plurality of functional units on a plurality of cores. A functional unit is selected from the plurality of functional units. The functional unit is divided into a plurality of subunits. The plurality of subunits are connected to the neural network in place of the functional unit. The plurality of functional units and the plurality of subunits are reallocated among the plurality of cores. One or more unused cores are removed from the plurality of cores. An optimized neural network description is written based on the reallocation.
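The steps here are: divide a functional unit into subunits, reallocate all units and subunits across cores, and drop cores left unused. A rough Python sketch of that idea follows; the 256-neuron core capacity, the even splitting rule, and the first-fit packing are illustrative assumptions rather than the patent's algorithm.

```python
# Sketch of dividing a functional unit and repacking subunits onto fewer cores.
# Capacity, splitting rule, and packing strategy are assumptions for illustration.

CORE_CAPACITY = 256   # assumed neurons-per-core budget

def divide_unit(unit_size, max_subunit):
    """Split one functional unit into subunits no larger than max_subunit."""
    subunits = []
    while unit_size > 0:
        piece = min(unit_size, max_subunit)
        subunits.append(piece)
        unit_size -= piece
    return subunits

def reallocate(unit_sizes, core_capacity=CORE_CAPACITY):
    """First-fit packing of (sub)units onto cores; empty cores never appear."""
    cores = []   # each core is a list of subunit sizes placed on it
    for size in sorted(unit_sizes, reverse=True):
        for core in cores:
            if sum(core) + size <= core_capacity:
                core.append(size)
                break
        else:
            cores.append([size])   # open a new core only when needed
    return cores

# A 600-neuron unit is divided, then packed together with two smaller units.
subunits = divide_unit(600, max_subunit=CORE_CAPACITY)
layout = reallocate(subunits + [40, 100])
print(f"{len(layout)} cores used, loads:", [sum(c) for c in layout])
```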