-
Publication No.: US20230062217A1
Publication Date: 2023-03-02
Application No.: US18046301
Filing Date: 2022-10-13
Inventors: Andrew S. Cassidy, Rathinakumar Appuswamy, John V. Arthur, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
Abstract: Hardware neural network processors are provided. A neural core includes a weight memory, an activation memory, a vector-matrix multiplier, and a vector processor. The vector-matrix multiplier is adapted to receive a weight matrix from the weight memory, receive an activation vector from the activation memory, and compute a vector-matrix multiplication of the weight matrix and the activation vector. The vector processor is adapted to receive one or more input vectors from one or more vector sources and perform one or more vector functions on the one or more input vectors to yield an output vector. In some embodiments, a programmable controller is adapted to configure and operate the neural core.
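A minimal sketch of the dataflow this abstract describes, assuming NumPy; the class and method names (NeuralCore, vector_matrix_multiply, vector_process) are invented for illustration and do not come from the patent:

```python
# Illustrative sketch only: all names here are invented, not from the patent.
import numpy as np

class NeuralCore:
    def __init__(self, weight_memory, activation_memory):
        self.weight_memory = weight_memory          # id -> weight matrix
        self.activation_memory = activation_memory  # id -> activation vector

    def vector_matrix_multiply(self, weight_id, activation_id):
        """Fetch a weight matrix and an activation vector and multiply them."""
        W = self.weight_memory[weight_id]
        a = self.activation_memory[activation_id]
        return W @ a

    def vector_process(self, input_vectors, fn):
        """Apply a vector function to one or more input vectors."""
        return fn(*input_vectors)

# One layer of inference: multiply, then ReLU as the vector function.
core = NeuralCore({"w0": np.random.randn(4, 8)}, {"x0": np.random.randn(8)})
z = core.vector_matrix_multiply("w0", "x0")
y = core.vector_process([z], lambda v: np.maximum(v, 0.0))
```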
-
Publication No.: US11501140B2
Publication Date: 2022-11-15
Application No.: US16012475
Filing Date: 2018-06-19
Inventors: Andrew S. Cassidy, Rathinakumar Appuswamy, John V. Arthur, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
Abstract: Hardware neural network processors are provided. A neural core includes a weight memory, an activation memory, a vector-matrix multiplier, and a vector processor. The vector-matrix multiplier is adapted to receive a weight matrix from the weight memory, receive an activation vector from the activation memory, and compute a vector-matrix multiplication of the weight matrix and the activation vector. The vector processor is adapted to receive one or more input vectors from one or more vector sources and perform one or more vector functions on the one or more input vectors to yield an output vector. In some embodiments, a programmable controller is adapted to configure and operate the neural core.
-
Publication No.: US20190303741A1
Publication Date: 2019-10-03
Application No.: US15942298
Filing Date: 2018-03-30
Inventors: Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
Abstract: Defect resistant designs for location-sensitive neural network processor arrays are provided. In various embodiments, a plurality of neural network processor cores are arrayed in a grid. The grid has a plurality of rows and a plurality of columns. A network interconnects at least those of the plurality of neural network processor cores that are adjacent within the grid. The network is adapted to bypass a defective core of the plurality of neural network processor cores by providing a connection between two non-adjacent rows or columns of the grid, and transparently routing messages between the two non-adjacent rows or columns, past the defective core.
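One plausible way to realize the row bypass is sketched below; the logical-to-physical row map is an assumption made for illustration, not the patent's mechanism:

```python
# Hypothetical sketch: a logical-to-physical row map that skips rows holding
# defective cores, so traffic is transparently routed past the defect.
def build_row_map(num_rows, defective_rows):
    usable = [r for r in range(num_rows) if r not in defective_rows]
    return {logical: physical for logical, physical in enumerate(usable)}

def route(row_map, logical_row, col):
    """Translate a logical destination into a physical core coordinate."""
    return (row_map[logical_row], col)

row_map = build_row_map(num_rows=8, defective_rows={3})
print(route(row_map, 3, 5))  # -> (4, 5): logical row 3 lands past the defect
```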
-
Publication No.: US12056598B2
Publication Date: 2024-08-06
Application No.: US18046301
Filing Date: 2022-10-13
Inventors: Andrew S. Cassidy, Rathinakumar Appuswamy, John V. Arthur, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
Abstract: Hardware neural network processors are provided. A neural core includes a weight memory, an activation memory, a vector-matrix multiplier, and a vector processor. The vector-matrix multiplier is adapted to receive a weight matrix from the weight memory, receive an activation vector from the activation memory, and compute a vector-matrix multiplication of the weight matrix and the activation vector. The vector processor is adapted to receive one or more input vectors from one or more vector sources and perform one or more vector functions on the one or more input vectors to yield an output vector. In some embodiments, a programmable controller is adapted to configure and operate the neural core.
-
Publication No.: US20200042856A1
Publication Date: 2020-02-06
Application No.: US16051034
Filing Date: 2018-07-31
Inventors: Pallab Datta, Andrew S. Cassidy, Myron D. Flickner, Hartmut Penner, Rathinakumar Appuswamy, Jun Sawada, John V. Arthur, Dharmendra S. Modha, Steven K. Esser, Brian Taba, Jennifer Klamo
Abstract: Mapping of neural network layers to physical neural cores is provided. In various embodiments, a neural network description describing a plurality of neural network layers is read. Each of the plurality of neural network layers has an associated weight tensor, input tensor, and output tensor. A plurality of precedence relationships among the plurality of neural network layers is determined. The weight tensor, input tensor, and output tensor of each of the plurality of neural network layers are mapped onto an array of neural cores.
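A rough sketch of the described flow under stated assumptions: layers are given as named records, precedence is derived from producer/consumer tensor names, and placement is a simple round-robin (the patent does not specify any of these details):

```python
# Assumed representation: each layer names its input/output tensors; precedence
# follows from which layer produces a tensor that another layer consumes.
from collections import namedtuple
from graphlib import TopologicalSorter

Layer = namedtuple("Layer", ["name", "inputs", "output"])

layers = [
    Layer("conv1", ["image"], "f1"),
    Layer("conv2", ["f1"], "f2"),
    Layer("fc",    ["f2"], "logits"),
]

producers = {l.output: l.name for l in layers}
deps = {l.name: {producers[t] for t in l.inputs if t in producers} for l in layers}
order = list(TopologicalSorter(deps).static_order())

# Hypothetical placement policy: round-robin each layer's tensors over cores.
NUM_CORES = 4
placement = {name: i % NUM_CORES for i, name in enumerate(order)}
print(order, placement)
```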
-
Publication No.: US20200019836A1
Publication Date: 2020-01-16
Application No.: US16033926
Filing Date: 2018-07-12
Inventors: John V. Arthur, Andrew S. Cassidy, Myron D. Flickner, Pallab Datta, Hartmut Penner, Rathinakumar Appuswamy, Jun Sawada, Dharmendra S. Modha, Steven K. Esser, Brian Taba, Jennifer Klamo
Abstract: Networks of distributed neural cores are provided with hierarchical parallelism. In various embodiments, a plurality of neural cores is provided. Each of the plurality of neural cores comprises a plurality of vector compute units configured to operate in parallel. Each of the plurality of neural cores is configured to compute in parallel output activations by applying its plurality of vector compute units to input activations. Each of the plurality of neural cores is assigned a subset of output activations of a layer of a neural network for computation. Upon receipt of a subset of input activations of the layer of the neural network, each of the plurality of neural cores computes a partial sum for each of its assigned output activations, and computes its assigned output activations from at least the computed partial sums.
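A toy model of the two levels of parallelism, assuming an even split of a layer's output activations across cores; the assignment policy here is invented:

```python
# Each "core" owns a contiguous block of output activations; the rows of its
# weight slice stand in for its parallel vector compute units.
import numpy as np

n_in, n_out, n_cores = 8, 6, 2
W, x = np.random.randn(n_out, n_in), np.random.randn(n_in)

y = np.empty(n_out)
for rows in np.array_split(np.arange(n_out), n_cores):
    partial = W[rows] @ x                  # partial sums for assigned outputs
    y[rows] = np.maximum(partial, 0.0)     # output activations from partial sums

assert np.allclose(y, np.maximum(W @ x, 0.0))
```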
-
Publication No.: US20200012929A1
Publication Date: 2020-01-09
Application No.: US16028158
Filing Date: 2018-07-05
Inventors: Hartmut Penner, Dharmendra S. Modha, John V. Arthur, Andrew S. Cassidy, Rathinakumar Appuswamy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Jun Sawada, Brian Taba
Abstract: Instruction distribution in an array of neural network cores is provided. In various embodiments, a neural inference chip is initialized with core microcode. The chip comprises a plurality of neural cores. The core microcode is executable by the neural cores to execute a tensor operation of a neural network. The core microcode is distributed to the plurality of neural cores via an on-chip network. The core microcode is executed synchronously by the plurality of neural cores to compute a neural network layer.
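A schematic sketch of the distribution step, in which the same microcode is sent to every core and then run in lockstep; Python lists stand in for the on-chip network and the microcode, so none of this reflects the patent's actual encoding:

```python
# Schematic only: lists stand in for the on-chip network and core microcode.
import numpy as np

def distribute(microcode, cores):
    for core in cores:                      # stands in for the on-chip network
        core["program"] = list(microcode)

def run_synchronously(cores, activations):
    """All cores execute the same instruction in the same step."""
    for instruction in cores[0]["program"]:
        activations = [instruction(core["weights"], a)
                       for core, a in zip(cores, activations)]
    return activations

cores = [{"weights": np.random.randn(4, 4)} for _ in range(3)]
microcode = [lambda W, a: W @ a,               # tensor (matvec) step
             lambda W, a: np.maximum(a, 0.0)]  # activation step
distribute(microcode, cores)
outputs = run_synchronously(cores, [np.random.randn(4) for _ in range(3)])
```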
-
Publication No.: US11663461B2
Publication Date: 2023-05-30
Application No.: US16028158
Filing Date: 2018-07-05
Inventors: Hartmut Penner, Dharmendra S. Modha, John V. Arthur, Andrew S. Cassidy, Rathinakumar Appuswamy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Jun Sawada, Brian Taba
Abstract: Instruction distribution in an array of neural network cores is provided. In various embodiments, a neural inference chip is initialized with core microcode. The chip comprises a plurality of neural cores. The core microcode is executable by the neural cores to execute a tensor operation of a neural network. The core microcode is distributed to the plurality of neural cores via an on-chip network. The core microcode is executed synchronously by the plurality of neural cores to compute a neural network layer.
-
Publication No.: US11238347B2
Publication Date: 2022-02-01
Application No.: US16146632
Filing Date: 2018-09-28
Inventors: Brian Taba, Andrew S. Cassidy, Myron D. Flickner, Pallab Datta, Hartmut Penner, Rathinakumar Appuswamy, Jun Sawada, John V. Arthur, Dharmendra S. Modha, Steven K. Esser, Jennifer Klamo
Abstract: Parallel processing among arrays of physical neural cores is provided. An array of neural cores is adapted to compute, in parallel, an output activation tensor of a neural network layer. A network is operatively connected to each of the neural cores. The output activation tensor is distributed across the neural cores. An input activation tensor is distributed across the neural cores. A weight tensor is distributed across the neural cores. Each neural core's computation comprises multiplying elements of a portion of the input activation tensor at that core with elements of a portion of the weight tensor at that core, and storing the summed products in a partial sum corresponding to an element of the output activation tensor. Each element of the output activation tensor is computed by accumulating all of the partial sums corresponding to that element via the network. The partial sums for each element of the output activation tensor are computed in a sequence of steps whose order is described by tracing a path through the weight tensor that visits every weight tensor element that contributes to any partial sum.
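The sequencing below is a simplified, single-process model of that accumulation: a column-major walk through the weight matrix stands in for the "path through the weight tensor", and the even split of inputs across cores is an assumption:

```python
# Partial sums for every output element accumulate step by step as the walk
# visits each weight column; the per-core split of inputs/weights is assumed.
import numpy as np

n_in, n_out, n_cores = 8, 4, 2
W, x = np.random.randn(n_out, n_in), np.random.randn(n_in)

y = np.zeros(n_out)                                    # accumulated partial sums
for cols in np.array_split(np.arange(n_in), n_cores):  # one slice per "core"
    for j in cols:                                     # the path visits column j
        y += W[:, j] * x[j]                            # one partial-sum step

assert np.allclose(y, W @ x)
```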
-
Publication No.: US20210312305A1
Publication Date: 2021-10-07
Application No.: US16842035
Filing Date: 2020-04-07
Inventors: Jun Sawada, Dharmendra S. Modha, Andrew S. Cassidy, John V. Arthur, Tapan K. Nayak, Carlos O. Otero, Brian Taba, Filipp A. Akopyan, Pallab Datta
Abstract: Neural inference chips for computing neural activations are provided. In various embodiments, a neural inference chip comprises at least one neural core, a memory array, an instruction buffer, and an instruction memory. The instruction buffer has a position corresponding to each of a plurality of elements of the memory array. The instruction memory provides at least one instruction to the instruction buffer. The instruction buffer advances the at least one instruction between positions in the instruction buffer. The instruction buffer provides the at least one instruction to at least one of the plurality of elements of the memory array from its associated position in the instruction buffer when the memory of the at least one of the plurality of elements contains data associated with the at least one instruction. Each element of the memory array provides a data block from its memory to its horizontal buffer in response to the arrival of an associated instruction from the instruction buffer. The horizontal buffer of each element of the memory array provides a data block to the horizontal buffer of another of the elements of the memory array or to the at least one neural core.
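A cycle-by-cycle toy simulation of the buffer behavior the abstract describes; the names, the shift directions, and the one-row layout are all assumptions made for illustration:

```python
# Assumed model: the instruction marches down a row of memory elements while
# data blocks shift the other way, toward the neural core at index 0.
def simulate(memory_blocks, instruction, cycles):
    n = len(memory_blocks)
    instr_buf = [None] * n       # one position per memory-array element
    horiz_buf = [None] * n       # horizontal buffers, data shifts toward index 0
    delivered = []
    instr_buf[0] = instruction   # instruction enters next to the core
    for _ in range(cycles):
        # An element whose memory holds data for the arriving instruction
        # loads a data block into its horizontal buffer.
        for i in range(n):
            if instr_buf[i] is not None and memory_blocks[i] is not None:
                horiz_buf[i] = memory_blocks[i]
                memory_blocks[i] = None          # block has been read out
        # Data blocks shift one element toward the neural core (index 0)...
        delivered.append(horiz_buf[0])
        horiz_buf = horiz_buf[1:] + [None]
        # ...while the instruction advances to the next buffer position.
        instr_buf = [None] + instr_buf[:-1]
    return [d for d in delivered if d is not None]

print(simulate(["blk0", "blk1", "blk2"], "read", cycles=6))
# -> ['blk0', 'blk1', 'blk2']: each block arrives at the core in turn.
```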