-
Publication number: US11521085B2
Publication date: 2022-12-06
Application number: US16842035
Filing date: 2020-04-07
Inventors: Jun Sawada, Dharmendra S. Modha, Andrew S. Cassidy, John V. Arthur, Tapan K. Nayak, Carlos O. Otero, Brian Taba, Filipp A. Akopyan, Pallab Datta
Abstract: Neural inference chips for computing neural activations are provided. In various embodiments, a neural inference chip comprises at least one neural core, a memory array, an instruction buffer, and an instruction memory. The instruction buffer has a position corresponding to each of a plurality of elements of the memory array. The instruction memory provides at least one instruction to the instruction buffer. The instruction buffer advances the at least one instruction between positions in the instruction buffer. The instruction buffer provides the at least one instruction to at least one of the plurality of elements of the memory array from its associated position in the instruction buffer when the memory of the at least one of the plurality of elements contains data associated with the at least one instruction. Each element of the memory array provides a data block from its memory to its horizontal buffer in response to the arrival of an associated instruction from the instruction buffer. The horizontal buffer of each element of the memory array provides a data block to the horizontal buffer of another of the elements of the memory array or to the at least one neural core.
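The sketch below is a behavioral toy model of the dataflow this abstract describes, under assumptions the patent does not state: a single row of memory-array elements, instructions modeled as tags matched against dictionary keys, and horizontal buffers modeled as FIFOs. All names are illustrative.

```python
from collections import deque

def simulate_memory_array(element_memories, instructions):
    """element_memories: one dict per memory-array element, mapping an
    instruction tag to the data block associated with that instruction."""
    n = len(element_memories)
    instruction_buffer = [None] * n                    # one position per element
    horizontal_buffers = [deque() for _ in range(n)]   # modeled as FIFOs (assumption)
    core_inputs = []                                   # blocks delivered to the neural core
    pending = list(instructions)                       # the instruction memory
    while pending or any(i is not None for i in instruction_buffer) or any(horizontal_buffers):
        # Each horizontal buffer provides a data block to the neighboring
        # element's horizontal buffer, or to the neural core at index 0.
        for i in range(n):
            if horizontal_buffers[i]:
                block = horizontal_buffers[i].popleft()
                (core_inputs if i == 0 else horizontal_buffers[i - 1]).append(block)
        # The instruction memory provides an instruction to the instruction
        # buffer, and instructions advance one position per step.
        instruction_buffer = [pending.pop(0) if pending else None] + instruction_buffer[:-1]
        # An element provides a data block from its memory to its horizontal
        # buffer when its memory contains data associated with the instruction.
        for i, instr in enumerate(instruction_buffer):
            if instr is not None and instr in element_memories[i]:
                horizontal_buffers[i].append(element_memories[i][instr])
    return core_inputs

# Example: two elements, each holding the data block for one instruction.
print(simulate_memory_array([{"ld0": "A"}, {"ld1": "B"}], ["ld0", "ld1"]))  # ['A', 'B']
```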
-
Publication number: US20220129742A1
Publication date: 2022-04-28
Application number: US17077709
Filing date: 2020-10-22
Inventors: Alexander Andreopoulos, Dharmendra S. Modha, Carmelo Di Nolfo, Myron D. Flickner, Andrew Stephen Cassidy, Brian Seisho Taba, Pallab Datta, Rathinakumar Appuswamy, Jun Sawada
Abstract: Simulation and validation of neural network systems are provided. In various embodiments, a description of an artificial neural network is read. A directed graph is constructed comprising a plurality of edges and a plurality of nodes, each of the plurality of edges corresponding to a queue and each of the plurality of nodes corresponding to a computing function of the neural network system. A graph state is updated over a plurality of time steps according to the description of the neural network, the graph state being defined by the contents of each of the plurality of queues. Each of a plurality of assertions is tested at each of the plurality of time steps, each of the plurality of assertions being a function of a subset of the graph state. Invalidity of the neural network system is indicated for each violation of one of the plurality of assertions.
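A minimal sketch of the queue-based graph simulation and assertion checking described here, assuming a hypothetical API for building the graph; class and edge names are illustrative, not from the patent.

```python
from collections import deque

class GraphSimulator:
    def __init__(self):
        self.queues = {}       # edge name -> deque; the graph state
        self.nodes = {}        # node name -> (compute fn, input edges, output edges)
        self.assertions = []   # (label, predicate over the graph state) pairs

    def add_edge(self, name):
        self.queues[name] = deque()

    def add_node(self, name, fn, inputs, outputs):
        self.nodes[name] = (fn, inputs, outputs)

    def add_assertion(self, label, predicate):
        self.assertions.append((label, predicate))

    def step(self):
        # Update the graph state: each node whose input queues are all non-empty
        # consumes one item from each and pushes its result onto its output queues.
        for fn, inputs, outputs in self.nodes.values():
            if all(self.queues[e] for e in inputs):
                result = fn(*[self.queues[e].popleft() for e in inputs])
                for e in outputs:
                    self.queues[e].append(result)

    def run(self, time_steps):
        violations = []
        for t in range(time_steps):
            self.step()
            # Test every assertion against the current graph state.
            for label, predicate in self.assertions:
                if not predicate(self.queues):
                    violations.append((t, label))
        return violations   # a non-empty list indicates invalidity

# Example: a source node feeding a doubling node, with an assertion that the
# output queue never holds a negative value.
sim = GraphSimulator()
for edge in ("e_in", "e_out"):
    sim.add_edge(edge)
sim.add_node("src", lambda: 1, inputs=[], outputs=["e_in"])
sim.add_node("dbl", lambda x: 2 * x, inputs=["e_in"], outputs=["e_out"])
sim.add_assertion("non_negative", lambda q: all(v >= 0 for v in q["e_out"]))
print(sim.run(time_steps=5))   # [] -> no assertion violations
```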
-
Publication number: US11301757B2
Publication date: 2022-04-12
Application number: US16696968
Filing date: 2019-11-26
Inventors: Charles J. Alpert, Pallab Datta, Myron D. Flickner, Zhou Li, Dharmendra S. Modha, Gi-Joon Nam
IPC classification: G06N3/10
Abstract: Embodiments of the present invention relate to providing fault-tolerant power minimization in a multi-core neurosynaptic network. In one embodiment of the present invention, a method of and computer program product for fault-tolerant power-driven synthesis is provided. Power consumption of a neurosynaptic network is modeled as wire length. The neurosynaptic network comprises a plurality of neurosynaptic cores connected by a plurality of routers. At least one faulty core of the plurality of neurosynaptic cores is located. A placement blockage is modeled at the location of the at least one faulty core. A placement of the neurosynaptic cores is determined by minimizing the wire length.
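A minimal sketch of the idea of excluding faulty-core sites as placement blockages and placing cores to reduce total wire length. The patent presumably relies on a full placement engine; the greedy heuristic, grid model, and net format below are illustrative assumptions.

```python
import itertools

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def place_cores(cores, nets, grid, faulty_sites):
    """cores: list of core ids; nets: (core_a, core_b) connections;
    grid: (rows, cols) of candidate sites; faulty_sites: set of (row, col)."""
    rows, cols = grid
    # Model a placement blockage at each faulty core's location by excluding it.
    free_sites = [s for s in itertools.product(range(rows), range(cols))
                  if s not in faulty_sites]
    placement = {}
    for core in cores:
        best_site, best_cost = None, None
        for site in free_sites:
            # Total wire length to already-placed neighbors stands in for power.
            cost = sum(manhattan(site, placement[other])
                       for a, b in nets
                       for this, other in ((a, b), (b, a))
                       if this == core and other in placement)
            if best_cost is None or cost < best_cost:
                best_site, best_cost = site, cost
        placement[core] = best_site
        free_sites.remove(best_site)
    return placement

# Example: 4 cores on a 2x3 grid with one faulty site excluded from placement.
nets = [("c0", "c1"), ("c1", "c2"), ("c2", "c3")]
print(place_cores(["c0", "c1", "c2", "c3"], nets, grid=(2, 3), faulty_sites={(0, 1)}))
```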
-
Publication number: US11295203B2
Publication date: 2022-04-05
Application number: US15220578
Filing date: 2016-07-27
Abstract: Neuron placement in a neuromorphic system to minimize cumulative delivery delay is provided. In some embodiments, a neural network description describing a plurality of neurons is read. A relative delivery delay associated with each of the plurality of neurons is determined. An ordering of the plurality of neurons is determined to optimize cumulative delivery delay over the plurality of neurons. An optimized neural network description based on the ordering of the plurality of neurons is written.
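A minimal sketch under a toy model that is not taken from the patent: neurons are serviced in sequence, a neuron's delivery completes after the delays of all neurons ahead of it plus its own, and the cumulative delivery delay is the sum of completion times. Under that assumed model, ordering neurons by increasing relative delay is optimal.

```python
from itertools import accumulate

def order_neurons(relative_delay):
    """relative_delay: dict mapping neuron id -> relative delivery delay."""
    return sorted(relative_delay, key=relative_delay.get)

def cumulative_delay(order, relative_delay):
    # Completion time of the k-th neuron is the running sum of delays so far.
    return sum(accumulate(relative_delay[n] for n in order))

delays = {"n0": 3, "n1": 1, "n2": 2}
best = order_neurons(delays)
print(best, cumulative_delay(best, delays))   # ['n1', 'n2', 'n0'] 10
```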
-
Publication number: US11205125B2
Publication date: 2021-12-21
Application number: US16024016
Filing date: 2018-06-29
Inventors: Pallab Datta, Dharmendra S. Modha
Abstract: Mapping of logical neural cores to physical neural cores is provided. In various embodiments, a neural network description describing a plurality of logical cores is read. A plurality of precedence relationships is determined among the plurality of logical cores. Based on the plurality of precedence relationships, a directed acyclic graph among the plurality of logical cores is generated. By breadth first search of the directed acyclic graph, a schedule is generated. The schedule maps each of the plurality of logical cores to one of a plurality of physical cores at one of a plurality of time slices. Execution of the schedule is simulated.
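A minimal sketch of the precedence-DAG and breadth-first scheduling step, assuming precedences arrive as (earlier, later) pairs and physical cores are assigned round-robin within each level; the data structures and the round-robin policy are illustrative assumptions.

```python
from collections import defaultdict, deque

def schedule_cores(precedences, logical_cores, num_physical):
    """precedences: (earlier, later) pairs among logical core ids."""
    # Build the directed acyclic graph and in-degree counts.
    succ = defaultdict(list)
    indeg = {c: 0 for c in logical_cores}
    for a, b in precedences:
        succ[a].append(b)
        indeg[b] += 1
    # Breadth-first search from the cores that have no predecessors.
    frontier = deque(c for c in logical_cores if indeg[c] == 0)
    schedule = []          # (logical core, physical core, time slice) triples
    time_slice = 0
    while frontier:
        level = list(frontier)
        frontier.clear()
        for i, core in enumerate(level):
            # Physical cores are assigned round-robin within a level; a level
            # larger than the physical array spills into later time slices.
            schedule.append((core, i % num_physical, time_slice + i // num_physical))
            for nxt in succ[core]:
                indeg[nxt] -= 1
                if indeg[nxt] == 0:
                    frontier.append(nxt)
        time_slice += (len(level) + num_physical - 1) // num_physical
    return schedule

prec = [("A", "C"), ("B", "C"), ("C", "D")]
print(schedule_cores(prec, ["A", "B", "C", "D"], num_physical=2))
# [('A', 0, 0), ('B', 1, 0), ('C', 0, 1), ('D', 0, 2)]
```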
-
Publication number: US11157795B2
Publication date: 2021-10-26
Application number: US15457658
Filing date: 2017-03-13
Abstract: Graph partitioning and placement for multi-chip neurosynaptic networks. According to various embodiments, a neural network description is read. The neural network description describes a plurality of neurons. The plurality of neurons has a mapping from an input domain of the neural network. The plurality of neurons is labeled based on the mapping from the input domain. The plurality of neurons is grouped into a plurality of groups according to the labeling. Each of the plurality of groups is continuous within the input domain. Each of the plurality of groups is assigned to at least one neurosynaptic core.
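A minimal sketch of the label-group-assign flow, assuming each neuron's input-domain mapping is a 2-D coordinate and that groups are formed along a simple lexicographic ordering of those coordinates; a tiling or space-filling curve could replace the sort without changing the overall flow. Names are illustrative.

```python
def partition_neurons(neuron_coords, neurons_per_core):
    """neuron_coords: dict of neuron id -> (row, col) in the input domain."""
    # Label neurons by their input-domain mapping and sort so that each group
    # covers a contiguous span of the input domain.
    labeled = sorted(neuron_coords, key=lambda n: neuron_coords[n])
    groups = [labeled[i:i + neurons_per_core]
              for i in range(0, len(labeled), neurons_per_core)]
    # Assign each group to a neurosynaptic core (core ids are indices here).
    return {core_id: group for core_id, group in enumerate(groups)}

coords = {f"n{i}": (i // 4, i % 4) for i in range(16)}   # a 4x4 input grid
print(partition_neurons(coords, neurons_per_core=4))     # one grid row per core
```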
-
Publication number: US20210174176A1
Publication date: 2021-06-10
Application number: US16705565
Filing date: 2019-12-06
Inventors: Andrew S. Cassidy, Rathinakumar Appuswamy, John V. Arthur, Pallab Datta, Steve Esser, Myron D. Flickner, Jeffrey McKinstry, Dharmendra S. Modha, Jun Sawada, Brian Taba
Abstract: Neural inference chips are provided. A neural core of the neural inference chip comprises a vector-matrix multiplier; a vector processor; and an activation unit operatively coupled to the vector processor. The vector-matrix multiplier, vector processor, and/or activation unit is adapted to operate at variable precision.
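A minimal sketch of a vector-matrix multiply whose operand precision is a runtime parameter; the symmetric clip-and-round quantization used here is an assumption made only to illustrate "variable precision", not the chip's actual scheme.

```python
def quantize(values, bits):
    # Clip and round to a signed integer range of the requested width.
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return [max(lo, min(hi, round(v))) for v in values]

def vector_matrix_multiply(vector, matrix, precision_bits):
    v = quantize(vector, precision_bits)
    # Each output element is the dot product of the quantized vector with one
    # quantized column of the matrix.
    return [sum(a * w for a, w in zip(v, quantize(col, precision_bits)))
            for col in zip(*matrix)]

print(vector_matrix_multiply([1.0, 2.0], [[3.0, -4.0], [5.0, 6.0]], precision_bits=4))  # [13, 8]
```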
-
Publication number: US20210110245A1
Publication date: 2021-04-15
Application number: US16653366
Filing date: 2019-10-15
Inventors: Jun Sawada, Filipp A. Akopyan, Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Dharmendra S. Modha, Tapan K. Nayak, Carlos O. Otero
Abstract: Neural inference chips for computing neural activations are provided. In various embodiments, the neural inference chip is adapted to: receive an input activation tensor comprising a plurality of input activations; receive a weight tensor comprising a plurality of weights; Booth recode each of the plurality of weights into a plurality of Booth-coded weights, each Booth-coded value having an order; multiply the input activation tensor by the Booth-coded weights, yielding a plurality of results for each input activation, each of the plurality of results corresponding to the orders of the Booth-coded weights; for each order of the Booth-coded weights, sum the corresponding results, yielding a plurality of partial sums, one for each order; and compute a neural activation from a sum of the plurality of partial sums.
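A minimal software sketch of the per-order accumulation described here, assuming standard radix-4 Booth recoding of 8-bit two's-complement weights (the radix and bit width are assumptions); the final activation is recovered by scaling each order's partial sum by 4**k and summing.

```python
def booth_recode_radix4(w, bits=8):
    """Recode a signed integer weight into radix-4 Booth digits in {-2,...,2};
    digit k has order k, i.e. it carries weight 4**k."""
    x = w & ((1 << bits) - 1)                       # two's-complement bit pattern
    b = [0] + [(x >> i) & 1 for i in range(bits)]   # b[0] is the implicit bit below the LSB
    return [b[i] + b[i + 1] - 2 * b[i + 2] for i in range(0, bits, 2)]

def booth_dot_product(activations, weights, bits=8):
    """Accumulate per-order partial sums of Booth-coded weight/activation products."""
    partial_sums = [0] * (bits // 2)                # one partial sum per order
    for a, w in zip(activations, weights):
        for order, digit in enumerate(booth_recode_radix4(w, bits)):
            partial_sums[order] += digit * a        # result at this order
    # The neural activation is computed from the sum of the partial sums,
    # each scaled by its order.
    return sum(ps * (4 ** k) for k, ps in enumerate(partial_sums))

acts, wts = [3, -2, 7], [5, 6, -3]
assert booth_dot_product(acts, wts) == sum(a * w for a, w in zip(acts, wts))  # -18
```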
-
Publication number: US10834024B2
Publication date: 2020-11-10
Application number: US16147198
Filing date: 2018-09-28
Inventors: Simon J. Hollis, Hartmut Penner, Andrew S. Cassidy, Jun Sawada, Pallab Datta
IPC classification: G06F15/16, H04L12/931, G06N3/02, H04L12/18
Abstract: According to one embodiment, a computer program product for performing selective multicast delivery includes a computer readable storage medium having program instructions embodied therewith, wherein the computer readable storage medium is not a transitory signal per se, and wherein the program instructions are executable by a selector of an intelligent processing unit (IPU) to cause the selector to perform a method comprising identifying, by the selector, an address header appended to an instance of data, comparing, by the selector, address data in the address header to identifier data stored at the selector, and conditionally delivering, by the selector, the instance of data, based on the comparing.
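A minimal sketch of the selector's compare-and-deliver step, assuming a single integer address header and an equality match; the Packet and Selector names and the callback interface are illustrative, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    address_header: int   # address header appended to the instance of data
    payload: bytes        # the instance of data itself

class Selector:
    def __init__(self, identifier, deliver_fn):
        self.identifier = identifier   # identifier data stored at the selector
        self.deliver_fn = deliver_fn   # downstream delivery callback

    def receive(self, packet):
        # Identify the address header, compare it to the stored identifier data,
        # and conditionally deliver the instance of data based on the comparison.
        if packet.address_header == self.identifier:
            self.deliver_fn(packet.payload)
            return True
        return False

received = []
sel = Selector(identifier=0x2A, deliver_fn=received.append)
sel.receive(Packet(address_header=0x2A, payload=b"spike"))   # delivered
sel.receive(Packet(address_header=0x07, payload=b"other"))   # filtered out
print(received)   # [b'spike']
```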
-
Publication number: US10832125B2
Publication date: 2020-11-10
Application number: US15908415
Filing date: 2018-02-28
Inventors: Arnon Amir, Rathinakumar Appuswamy, Pallab Datta, Myron D. Flickner, Paul A. Merolla, Dharmendra S. Modha, Benjamin G. Shaw
Abstract: One embodiment of the invention provides a system for mapping a neural network onto a neurosynaptic substrate. The system comprises a metadata analysis unit for analyzing metadata information associated with one or more portions of an adjacency matrix representation of the neural network, and a mapping unit for mapping the one or more portions of the matrix representation onto the neurosynaptic substrate based on the metadata information.
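A minimal sketch of the analyze-then-map split, assuming the metadata of interest is the synapse density of each portion (sub-matrix) of the adjacency matrix and that the mapping policy is a simple density threshold; both the metadata field and the policy are illustrative assumptions, not taken from the patent.

```python
def analyze_metadata(block):
    """block: a portion of the adjacency matrix, as a list of rows."""
    rows, cols = len(block), len(block[0]) if block else 0
    nonzero = sum(1 for row in block for w in row if w)
    return {"density": nonzero / (rows * cols) if rows and cols else 0.0}

def map_portions(blocks, density_threshold=0.25):
    placement = []
    for idx, block in enumerate(blocks):
        meta = analyze_metadata(block)
        # Map each portion onto the substrate based on its metadata: here,
        # dense portions go to a crossbar core, sparse portions to a routed pool.
        target = "crossbar" if meta["density"] >= density_threshold else "routed"
        placement.append((idx, target, meta))
    return placement

blocks = [[[1, 0], [1, 1]], [[0, 0], [0, 0]]]
print(map_portions(blocks))   # first portion -> crossbar, second -> routed
```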