1.
Publication Number: US20230062217A1
Publication Date: 2023-03-02
Application Number: US18046301
Application Date: 2022-10-13
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Andrew S. Cassidy, Rathinakumar Appuswamy, John V. Arthur, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
Abstract: Hardware neural network processors are provided. A neural core includes a weight memory, an activation memory, a vector-matrix multiplier, and a vector processor. The vector-matrix multiplier is adapted to receive a weight matrix from the weight memory, receive an activation vector from the activation memory, and compute a vector-matrix multiplication of the weight matrix and the activation vector. The vector processor is adapted to receive one or more input vectors from one or more vector sources and perform one or more vector functions on the one or more input vectors to yield an output vector. In some embodiments, a programmable controller is adapted to configure and operate the neural core.
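For illustration, here is a minimal NumPy sketch of the dataflow this abstract describes: a weight matrix and an activation vector are fetched from their memories, multiplied, and the result is post-processed by a vector processor. All names (NeuralCore, vector_processor, etc.) are hypothetical, not the patented design.

```python
import numpy as np

class NeuralCore:
    """Toy model of a neural core with weight and activation memories."""
    def __init__(self, weight_memory, activation_memory):
        self.weight_memory = weight_memory          # name -> weight matrix
        self.activation_memory = activation_memory  # name -> activation vector

    def vector_matrix_multiply(self, weight_name, activation_name):
        W = self.weight_memory[weight_name]          # fetch weight matrix
        a = self.activation_memory[activation_name]  # fetch activation vector
        return a @ W                                 # vector-matrix product

    def vector_processor(self, input_vectors, functions):
        # Combine one or more input vectors, then apply vector functions
        # in sequence to yield a single output vector.
        out = np.sum(input_vectors, axis=0)
        for fn in functions:
            out = fn(out)
        return out

core = NeuralCore({"fc1": np.random.randn(4, 3)}, {"x": np.random.randn(4)})
z = core.vector_matrix_multiply("fc1", "x")
y = core.vector_processor([z], [lambda v: np.maximum(v, 0.0)])  # ReLU
print(y)
```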
2.
Publication Number: US11501140B2
Publication Date: 2022-11-15
Application Number: US16012475
Application Date: 2018-06-19
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Andrew S. Cassidy, Rathinakumar Appuswamy, John V. Arthur, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
Abstract: Hardware neural network processors are provided. A neural core includes a weight memory, an activation memory, a vector-matrix multiplier, and a vector processor. The vector-matrix multiplier is adapted to receive a weight matrix from the weight memory, receive an activation vector from the activation memory, and compute a vector-matrix multiplication of the weight matrix and the activation vector. The vector processor is adapted to receive one or more input vectors from one or more vector sources and perform one or more vector functions on the one or more input vectors to yield an output vector. In some embodiments, a programmable controller is adapted to configure and operate the neural core.
3.
Publication Number: US20190303741A1
Publication Date: 2019-10-03
Application Number: US15942298
Application Date: 2018-03-30
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
Abstract: Defect-resistant designs for location-sensitive neural network processor arrays are provided. In various embodiments, a plurality of neural network processor cores is arrayed in a grid. The grid has a plurality of rows and a plurality of columns. A network interconnects at least those of the plurality of neural network processor cores that are adjacent within the grid. The network is adapted to bypass a defective core of the plurality of neural network processor cores by providing a connection between two non-adjacent rows or columns of the grid, and transparently routing messages between the two non-adjacent rows or columns, past the defective core.
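As a hedged illustration of the bypass idea, the sketch below models it as a logical-to-physical row map that skips a row containing a defective core, so traffic between logical neighbors transparently crosses physically non-adjacent rows; the function names are assumptions, not the patent's mechanism.

```python
def build_row_map(num_physical_rows, defective_rows):
    # Logical rows are the physical rows with defective ones skipped.
    usable = [r for r in range(num_physical_rows) if r not in defective_rows]
    return {logical: physical for logical, physical in enumerate(usable)}

def route(row_map, src_logical, dst_logical):
    # Senders address logical rows; the map may connect two physically
    # non-adjacent rows, routing messages past the defective core.
    return row_map[src_logical], row_map[dst_logical]

row_map = build_row_map(8, defective_rows={3})  # row 3 holds a defective core
print(route(row_map, 2, 3))  # (2, 4): physical row 3 is transparently bypassed
```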
4.
Publication Number: US11663461B2
Publication Date: 2023-05-30
Application Number: US16028158
Application Date: 2018-07-05
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Hartmut Penner, Dharmendra S. Modha, John V. Arthur, Andrew S. Cassidy, Rathinakumar Appuswamy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Jun Sawada, Brian Taba
Abstract: Instruction distribution in an array of neural network cores is provided. In various embodiments, a neural inference chip is initialized with core microcode. The chip comprises a plurality of neural cores. The core microcode is executable by the neural cores to execute a tensor operation of a neural network. The core microcode is distributed to the plurality of neural cores via an on-chip network. The core microcode is executed synchronously by the plurality of neural cores to compute a neural network layer.
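A toy Python sketch of this distribute-then-execute pattern follows: the same core microcode is broadcast to every core (standing in for the on-chip network), then all cores run it in lockstep to compute one layer. Names and structures are illustrative assumptions.

```python
import numpy as np

def distribute(cores, microcode):
    # Broadcast the core microcode to every neural core.
    for core in cores:
        core["microcode"] = microcode

def execute_layer(cores, activations):
    # Every core executes the same microcode synchronously on its
    # local activations; together the results form one layer's output.
    return [core["microcode"](a) for core, a in zip(cores, activations)]

cores = [{} for _ in range(4)]
distribute(cores, lambda a: np.maximum(a @ np.eye(2), 0.0))  # this layer's tensor op
activations = [np.random.randn(2) for _ in range(4)]
print(execute_layer(cores, activations))
```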
5.
Publication Number: US11238347B2
Publication Date: 2022-02-01
Application Number: US16146632
Application Date: 2018-09-28
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Brian Taba, Andrew S. Cassidy, Myron D. Flickner, Pallab Datta, Hartmut Penner, Rathinakumar Appuswamy, Jun Sawada, John V. Arthur, Dharmendra S. Modha, Steven K. Esser, Jennifer Klamo
Abstract: Parallel processing among arrays of physical neural cores is provided. An array of neural cores is adapted to compute, in parallel, an output activation tensor of a neural network layer. A network is operatively connected to each of the neural cores. The output activation tensor is distributed across the neural cores. An input activation tensor is distributed across the neural cores. A weight tensor is distributed across the neural cores. Each neural core's computation comprises multiplying elements of a portion of the input activation tensor at that core with elements of a portion of the weight tensor at that core, and storing the summed products in a partial sum corresponding to an element of the output activation tensor. Each element of the output activation tensor is computed by accumulating all of the partial sums corresponding to that element via the network. The partial sums for each element of the output activation tensor are computed in a sequence of steps whose order is described by tracing a path through the weight tensor that visits every weight tensor element that contributes to any partial sum.
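The sketch below is a minimal, assumption-laden model of that scheme: each simulated core multiplies its slice of the input activations by its slice of the weights, and the per-element partial sums are accumulated in a fixed traversal order, standing in for the network-based accumulation.

```python
import numpy as np

def parallel_layer(x, W, num_cores):
    # Distribute the input activations (and matching weight rows) across cores.
    x_parts = np.array_split(x, num_cores)
    W_parts = np.array_split(W, num_cores, axis=0)
    out = np.zeros(W.shape[1])  # one partial sum per output element
    # The loop traces a path through the weight tensor that visits every
    # weight element contributing to any partial sum; "+=" stands in for
    # accumulating partial sums over the inter-core network.
    for xp, Wp in zip(x_parts, W_parts):
        out += xp @ Wp  # this core's partial sums
    return out

x, W = np.random.randn(8), np.random.randn(8, 3)
assert np.allclose(parallel_layer(x, W, num_cores=4), x @ W)
```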
6.
Publication Number: US20210312305A1
Publication Date: 2021-10-07
Application Number: US16842035
Application Date: 2020-04-07
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Jun Sawada, Dharmendra S. Modha, Andrew S. Cassidy, John V. Arthur, Tapan K. Nayak, Carlos O. Otero, Brian Taba, Filipp A. Akopyan, Pallab Datta
Abstract: Neural inference chips for computing neural activations are provided. In various embodiments, a neural inference chip comprises at least one neural core, a memory array, an instruction buffer, and an instruction memory. The instruction buffer has a position corresponding to each of a plurality of elements of the memory array. The instruction memory provides at least one instruction to the instruction buffer. The instruction buffer advances the at least one instruction between positions in the instruction buffer. The instruction buffer provides the at least one instruction to at least one of the plurality of elements of the memory array from its associated position in the instruction buffer when the memory of the at least one of the plurality of elements contains data associated with the at least one instruction. Each element of the memory array provides a data block from its memory to its horizontal buffer in response to the arrival of an associated instruction from the instruction buffer. The horizontal buffer of each element of the memory array provides a data block to the horizontal buffer of another of the elements of the memory array or to the at least one neural core.
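Below is a heavily simplified model of that flow, with hypothetical names: an instruction advances past each memory-array element; an element whose memory holds matching data pushes a data block into a shared "horizontal buffer" queue, which forwards blocks toward the neural core.

```python
from collections import deque

def run(memory_array, instruction, core_inbox):
    horizontal = deque()               # stands in for the chained horizontal buffers
    for element in memory_array:       # instruction advances position by position
        if instruction in element:     # element's memory holds matching data
            horizontal.append(element[instruction])
    while horizontal:                  # blocks hop buffer-to-buffer toward the core
        core_inbox.append(horizontal.popleft())

memory_array = [{"w0": [1, 2]}, {}, {"w0": [3, 4]}]
inbox = []
run(memory_array, "w0", inbox)
print(inbox)  # [[1, 2], [3, 4]]
```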
7.
Publication Number: US11010662B2
Publication Date: 2021-05-18
Application Number: US16808900
Application Date: 2020-03-04
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
Abstract: Massively parallel neural inference computing elements are provided. A plurality of multipliers is arranged in a plurality of equal-sized groups. Each of the plurality of multipliers is adapted to, in parallel, apply a weight to an input activation to generate an output. Each of a plurality of adders is operatively coupled to one of the groups of multipliers and is adapted to, in parallel, add the outputs of the multipliers within its associated group to generate a partial sum. Each of a plurality of function blocks is operatively coupled to one of the plurality of adders and is adapted to, in parallel, apply a function to the partial sum of its associated adder to generate an output value.
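A short NumPy sketch of the multiply / group-add / activate pipeline described here, under the assumption of a fixed group size; the function names are illustrative only.

```python
import numpy as np

def inference_stage(weights, activations, group_size, fn=np.tanh):
    products = weights * activations           # all multipliers fire in parallel
    groups = products.reshape(-1, group_size)  # equal-sized multiplier groups
    partial_sums = groups.sum(axis=1)          # one adder per group
    return fn(partial_sums)                    # one function block per adder

w, a = np.random.randn(8), np.random.randn(8)
print(inference_stage(w, a, group_size=4))     # two output values
```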
8.
Publication Number: US12182687B2
Publication Date: 2024-12-31
Application Number: US16157852
Application Date: 2018-10-11
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: John V. Arthur, Andrew S. Cassidy, Myron D. Flickner, Pallab Datta, Hartmut Penner, Rathinakumar Appuswamy, Jun Sawada, Dharmendra S. Modha, Steven K. Esser, Brian Taba, Jennifer Klamo
Abstract: Systems for neural network computation are provided. A neural network processor comprises a plurality of neural cores. The neural network processor has one or more processor precisions per activation. The processor is configured to accept data having a processor feature dimension. A transformation circuit is coupled to the neural network processor, and is adapted to: receive an input data tensor having an input precision per channel at one or more features; transform the input data tensor from the input precision to the processor precision; divide the input data into a plurality of blocks, each block conforming to one of the processor feature dimensions; and provide each of the plurality of blocks to one of the plurality of neural cores. The neural network processor is adapted to compute, by the plurality of neural cores, the output of one or more neural network layers.
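As a hedged sketch of the transformation circuit's role, the code below quantizes an input tensor to an assumed processor precision and slices its feature axis into processor-sized blocks, one per core. The uniform quantizer and all names are assumptions for illustration.

```python
import numpy as np

def transform(input_tensor, processor_bits, feature_dim):
    # Transform from the input precision to the processor precision
    # (a simple symmetric uniform quantizer, assumed for illustration).
    limit = 2 ** (processor_bits - 1)
    scale = (limit - 1) / np.abs(input_tensor).max()
    q = np.clip(np.round(input_tensor * scale), -limit, limit - 1)
    # Divide the features into blocks matching the processor feature
    # dimension; each block would be provided to one neural core.
    return [q[..., i:i + feature_dim] for i in range(0, q.shape[-1], feature_dim)]

x = np.random.randn(2, 16).astype(np.float32)       # 16 input features
blocks = transform(x, processor_bits=8, feature_dim=4)
print(len(blocks), blocks[0].shape)                 # 4 blocks of shape (2, 4)
```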
9.
Publication Number: US12056598B2
Publication Date: 2024-08-06
Application Number: US18046301
Application Date: 2022-10-13
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Andrew S. Cassidy, Rathinakumar Appuswamy, John V. Arthur, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
Abstract: Hardware neural network processors are provided. A neural core includes a weight memory, an activation memory, a vector-matrix multiplier, and a vector processor. The vector-matrix multiplier is adapted to receive a weight matrix from the weight memory, receive an activation vector from the activation memory, and compute a vector-matrix multiplication of the weight matrix and the activation vector. The vector processor is adapted to receive one or more input vectors from one or more vector sources and perform one or more vector functions on the one or more input vectors to yield an output vector. In some embodiments, a programmable controller is adapted to configure and operate the neural core.
10.
Publication Number: US20200042856A1
Publication Date: 2020-02-06
Application Number: US16051034
Application Date: 2018-07-31
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Pallab Datta, Andrew S. Cassidy, Myron D. Flickner, Hartmut Penner, Rathinakumar Appuswamy, Jun Sawada, John V. Arthur, Dharmendra S. Modha, Steven K. Esser, Brian Taba, Jennifer Klamo
Abstract: Mapping of neural network layers to physical neural cores is provided. In various embodiments, a neural network description describing a plurality of neural network layers is read. Each of the plurality of neural network layers has an associated weight tensor, input tensor, and output tensor. A plurality of precedence relationships among the plurality of neural network layers is determined. The weight tensor, input tensor, and output tensor of each of the plurality of neural network layers are mapped onto an array of neural cores.
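A toy sketch of that flow, with an assumed layer-description format: precedence is derived from producer/consumer tensor names, and each layer's tensors are assigned to cores round-robin. None of this reflects the patent's actual mapping algorithm.

```python
layers = [  # hypothetical neural network description
    {"name": "conv1", "in": "x",  "out": "h1"},
    {"name": "conv2", "in": "h1", "out": "h2"},
    {"name": "fc",    "in": "h2", "out": "y"},
]

# Precedence: layer A precedes layer B if A produces B's input tensor.
precedes = [(a["name"], b["name"])
            for a in layers for b in layers if a["out"] == b["in"]]

# Map each layer's weight/input/output tensors onto an array of cores.
num_cores = 4
mapping = {layer["name"]: idx % num_cores for idx, layer in enumerate(layers)}
print(precedes)  # [('conv1', 'conv2'), ('conv2', 'fc')]
print(mapping)   # {'conv1': 0, 'conv2': 1, 'fc': 2}
```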