Yield improvements for three-dimensionally stacked neural network accelerators

    Publication No.: US12292473B2

    Publication Date: 2025-05-06

    Application No.: US18527902

    Filing Date: 2023-12-04

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for three-dimensionally stacked neural network accelerators. In one aspect, a method includes obtaining data specifying that a tile from a plurality of tiles in a three-dimensionally stacked neural network accelerator is a faulty tile. The three-dimensionally stacked neural network accelerator includes a plurality of neural network dies, each neural network die including a respective plurality of tiles, and each tile having input and output connections. The three-dimensionally stacked neural network accelerator is configured to process inputs by routing the input through each of the plurality of tiles according to a dataflow configuration. The method further includes modifying the dataflow configuration to route an output of the tile that precedes the faulty tile in the dataflow configuration to an input connection of a tile that is positioned above or below the faulty tile on a different neural network die than the faulty tile.
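
    The rerouting described in this abstract can be illustrated in software. The sketch below is an illustrative model only, not the patented circuit: it assumes tiles are identified by (die, row, column) coordinates and that the dataflow configuration is an ordered list of tiles, and it swaps the faulty tile for the tile stacked directly above or below it on an adjacent die.

```python
# Illustrative sketch only: a software model of the rerouting idea, not the
# patented hardware. Tile coordinates, the die count, and the list-based
# dataflow representation are assumptions made for this example.
from typing import List, Tuple

Tile = Tuple[int, int, int]  # (die, row, col) position of a tile in the 3D stack


def reroute_around_faulty_tile(dataflow: List[Tile],
                               faulty: Tile,
                               num_dies: int) -> List[Tile]:
    """Route the preceding tile's output to the tile stacked directly above or
    below the faulty tile on a different die, instead of to the faulty tile."""
    die, row, col = faulty
    if die + 1 < num_dies:          # prefer the tile directly above
        spare = (die + 1, row, col)
    elif die - 1 >= 0:              # otherwise fall back to the tile below
        spare = (die - 1, row, col)
    else:
        raise ValueError("no vertically adjacent die available for rerouting")
    return [spare if tile == faulty else tile for tile in dataflow]


# Example: a 2-die stack with a 3-tile dataflow originally confined to die 0.
dataflow = [(0, 0, 0), (0, 0, 1), (0, 0, 2)]
print(reroute_around_faulty_tile(dataflow, faulty=(0, 0, 1), num_dies=2))
# [(0, 0, 0), (1, 0, 1), (0, 0, 2)]
```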

    NEURAL NETWORK ACCELERATOR WITH PARAMETERS RESIDENT ON CHIP

    Publication No.: US20230162015A1

    Publication Date: 2023-05-25

    Application No.: US17985061

    Filing Date: 2022-11-10

    Applicant: Google LLC

    Abstract: One embodiment of an accelerator includes a computing unit; a first memory bank for storing input activations; and a second memory bank for storing parameters used in performing computations, the second memory bank configured to store a sufficient amount of the neural network parameters on the computing unit to allow for latency below a specified level with throughput above a specified level. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator.
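
    A minimal software analogue of the described compute unit may help clarify the roles of the two memory banks. The sketch below is illustrative only: the class name, array shapes, and the Python loop standing in for the traversal unit are assumptions, not the accelerator's actual design.

```python
# Illustrative sketch only: a software analogue of the described compute unit.
# The class name, array shapes, and the loop standing in for the traversal
# unit are assumptions for this example.
import numpy as np


class ComputeUnit:
    def __init__(self, parameters: np.ndarray):
        # Second memory bank: parameters stay resident on the unit across runs,
        # so they need not be re-fetched from off-chip memory per input.
        self.param_bank = parameters

    def run(self, activation_bank: np.ndarray) -> np.ndarray:
        # First memory bank: input activations, driven one at a time onto the
        # "data bus" (modeled by this loop, playing the traversal unit's role).
        accumulator = np.zeros(self.param_bank.shape[0])
        for i, activation in enumerate(activation_bank):
            # MAC operator: multiply the bus value by resident parameters and
            # accumulate the partial sums.
            accumulator += self.param_bank[:, i] * activation
        return accumulator


unit = ComputeUnit(parameters=np.arange(6.0).reshape(2, 3))
print(unit.run(np.array([1.0, 2.0, 3.0])))  # same result as parameters @ activations
```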

    NEURAL NETWORK COMPUTE TILE
    Invention Application

    Publication No.: US20230004386A1

    Publication Date: 2023-01-05

    Application No.: US17892807

    Filing Date: 2022-08-22

    Applicant: Google LLC

    Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
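
    The sketch below focuses on the traversal unit's role as described in this abstract: generating control signals (modeled here as flat addresses) that place input activations on the data bus for the MAC operator. The row-major walk order and all function names are assumptions made for illustration, not the patented design.

```python
# Illustrative sketch only: the traversal unit modeled as an address generator
# driving the first memory bank. The row-major walk order and all names are
# assumptions for this example.

def traversal_unit(shape):
    """Yield flat addresses into the activation bank for a row-major walk."""
    rows, cols = shape
    for r in range(rows):
        for c in range(cols):
            yield r * cols + c  # control signal: which activation to drive next


def mac_over_tile(activation_bank, parameter_bank, shape):
    accumulator = 0.0
    for addr in traversal_unit(shape):
        activation = activation_bank[addr]     # value placed on the data bus
        parameter = parameter_bank[addr]       # parameter from the second bank
        accumulator += activation * parameter  # MAC: multiply, then accumulate
    return accumulator


activations = [1.0, 2.0, 3.0, 4.0]
parameters = [0.5, 0.5, 0.5, 0.5]
print(mac_over_tile(activations, parameters, shape=(2, 2)))  # 5.0
```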

    Neural network crossbar stack
    Invention Grant

    Publication No.: US11341402B2

    Publication Date: 2022-05-24

    Application No.: US16168135

    Filing Date: 2018-10-23

    Applicant: Google LLC

    Abstract: A circuit for performing neural network computations for a neural network is described. The circuit includes a plurality of neural network layers, each including a crossbar array. The plurality of crossbar arrays are formed in a common substrate in a stacked configuration. Each crossbar array includes a set of crosspoint devices. A respective electrical property of each of the crosspoint devices is adjustable to represent a weight value that is stored for each respective crosspoint device. A processing unit is configured to adjust the respective electrical properties of each of the crosspoint devices by pre-loading each of the crosspoint devices with a tuning signal. A value of the tuning signal for each crosspoint device is a function of the weight value represented by each respective crosspoint device.
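
    As a rough numerical analogy, the sketch below models one crossbar array whose crosspoint conductances are pre-loaded from stored weight values and which computes its outputs as an analog multiply-accumulate. The linear weight-to-conductance mapping and its scale factor are assumptions for the example, not the patented tuning scheme.

```python
# Illustrative sketch only: a numerical model of one crossbar array. The linear
# weight-to-conductance mapping and its scale factor are assumptions for this
# example, not the patented tuning scheme.
import numpy as np


def preload_crosspoints(weights: np.ndarray, scale: float = 1e-6) -> np.ndarray:
    """Map stored weight values to crosspoint conductances (the tuning signal
    applied to each crosspoint is a function of its weight value)."""
    return weights * scale


def crossbar_layer(input_voltages: np.ndarray, conductances: np.ndarray) -> np.ndarray:
    """Output currents of one crossbar: an analog multiply-accumulate, I = G^T V."""
    return conductances.T @ input_voltages


weights = np.array([[0.2, -0.5], [1.0, 0.3], [-0.7, 0.8]])  # 3 inputs -> 2 outputs
conductances = preload_crosspoints(weights)
print(crossbar_layer(np.array([1.0, 0.5, -1.0]), conductances))
```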

    Vector reduction processor
    Invention Grant

    Publication No.: US10706007B2

    Publication Date: 2020-07-07

    Application No.: US16129663

    Filing Date: 2018-09-12

    Applicant: Google LLC

    Abstract: A vector reduction circuit configured to reduce an input vector of elements comprises a plurality of cells, wherein each of the plurality of cells other than a designated first cell that receives a designated first element of the input vector is configured to receive a particular element of the input vector, receive, from another of the plurality of cells, a temporary reduction element, perform a reduction operation using the particular element and the temporary reduction element, and provide, as a new temporary reduction element, a result of performing the reduction operation using the particular element and the temporary reduction element. The vector reduction circuit also comprises an output circuit configured to provide, for output as a reduction of the input vector, a new temporary reduction element corresponding to a result of performing the reduction operation using a last element of the input vector.
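
    The cell chain can be modeled in software as a running reduction that passes a temporary reduction element from cell to cell. The sketch below is illustrative only; the choice of addition as the default reduction operation is an assumption for the example.

```python
# Illustrative sketch only: the cell chain modeled as a running reduction.
# Addition as the default reduction operation is an assumption for this example.
from typing import Callable, Sequence


def vector_reduce(elements: Sequence[float],
                  op: Callable[[float, float], float] = lambda a, b: a + b) -> float:
    # Designated first cell: receives the first element and forwards it as the
    # initial temporary reduction element.
    temp = elements[0]
    # Each remaining cell combines its own element with the incoming temporary
    # reduction element and passes the result on as the new temporary element.
    for element in elements[1:]:
        temp = op(element, temp)
    # Output circuit: the value produced after the last element is the result.
    return temp


print(vector_reduce([1.0, 2.0, 3.0, 4.0]))     # 10.0 (sum reduction)
print(vector_reduce([1.0, 7.0, 3.0], op=max))  # 7.0 (max reduction)
```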

    ACCESSING DATA IN MULTI-DIMENSIONAL TENSORS USING ADDERS

    Publication No.: US20180341479A1

    Publication Date: 2018-11-29

    Application No.: US15903991

    Filing Date: 2018-02-23

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including an apparatus for accessing an N-dimensional tensor, the apparatus including, for each dimension of the N-dimensional tensor, a partial address offset value element that stores a partial address offset value for the dimension based at least on an initial value for the dimension, a step value for the dimension, and a number of iterations of a loop for the dimension. The apparatus includes a hardware adder and a processor. The processor obtains an instruction to access a particular element of the N-dimensional tensor. The N-dimensional tensor has multiple elements arranged across each of the N dimensions, where N is an integer that is equal to or greater than one. The processor determines, using the partial address offset value elements and the hardware adder, an address of the particular element and outputs data indicating the determined address for accessing the particular element of the N-dimensional tensor.
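
    The address computation can be illustrated as summing one partial address offset per dimension, the way a hardware adder might combine per-dimension offset registers. The row-major step values and zero base address in the sketch below are assumptions made for the example, not the patented addressing hardware.

```python
# Illustrative sketch only: summing one partial address offset per dimension to
# form a flat address. Row-major step values and a zero base address are
# assumptions for this example.
from typing import List, Sequence


def step_values(shape: Sequence[int]) -> List[int]:
    """Row-major step (stride) value for each dimension of the tensor."""
    steps = [1] * len(shape)
    for d in range(len(shape) - 2, -1, -1):
        steps[d] = steps[d + 1] * shape[d + 1]
    return steps


def element_address(indices: Sequence[int], shape: Sequence[int], base: int = 0) -> int:
    """Add the per-dimension partial address offsets (index * step) to the base."""
    address = base
    for index, step in zip(indices, step_values(shape)):
        address += index * step  # partial address offset value for this dimension
    return address


# Element [1, 2, 3] of a 2x4x5 tensor: 1*20 + 2*5 + 3*1 = 33
print(element_address([1, 2, 3], shape=(2, 4, 5)))
```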

    Neural network crossbar stack
    Invention Grant

    Publication No.: US10127494B1

    Publication Date: 2018-11-13

    Application No.: US15667230

    Filing Date: 2017-08-02

    Applicant: Google LLC

    Abstract: A circuit for performing neural network computations for a neural network is described. The circuit includes a plurality of neural network layers, each including a crossbar array. The plurality of crossbar arrays are formed in a common substrate in a stacked configuration. Each crossbar array includes a set of crosspoint devices. A respective electrical property of each of the crosspoint devices is adjustable to represent a weight value that is stored for each respective crosspoint device. A processing unit is configured to adjust the respective electrical properties of each of the crosspoint devices by pre-loading each of the crosspoint devices with a tuning signal. A value of the tuning signal for each crosspoint device is a function of the weight value represented by each respective crosspoint device.
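
    Complementing the single-array sketch after the earlier crossbar entry, the model below chains several crossbar layers to mimic the stacked configuration described in this abstract. The ReLU nonlinearity between layers and the random conductance values are assumptions made for illustration only.

```python
# Illustrative sketch only: chaining several crossbar layers to mimic the
# stacked configuration. The ReLU between layers and the random conductance
# values are assumptions made for this example.
import numpy as np


def stacked_crossbar_forward(input_voltages: np.ndarray,
                             layer_conductances: list) -> np.ndarray:
    """Pass a signal through a stack of crossbar arrays, one per network layer."""
    signal = input_voltages
    for conductances in layer_conductances:
        signal = np.maximum(conductances.T @ signal, 0.0)  # layer output + ReLU
    return signal


rng = np.random.default_rng(0)
layers = [rng.random((4, 3)) * 1e-6, rng.random((3, 2)) * 1e-6]
print(stacked_crossbar_forward(np.ones(4), layers))
```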
