    Neural network instruction set architecture

    Publication number: US12061968B2

    Publication date: 2024-08-13

    Application number: US17845291

    Filing date: 2022-06-21

    Applicant: Google LLC

    CPC classification number: G06N3/04 G06F13/28 G06N3/045 G06N3/063

    Abstract: A computer-implemented method includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
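
    A rough Python sketch of the mechanism this abstract describes: an instruction carries data values (here, a layer type and loop bounds), and the processing unit executes a loop nest whose structure is derived from those values. All class, field, and function names below are illustrative assumptions, not taken from the patent.

        from dataclasses import dataclass

        @dataclass
        class TensorInstruction:
            layer_type: str   # value specifying the neural network layer type
            dims: dict        # data values that define the loop-nest bounds

        def execute(instr, activations, weights):
            # The structure of the loop nest is chosen from the instruction's data values.
            if instr.layer_type == "fully_connected":
                m, n = instr.dims["out"], instr.dims["in"]
                out = [0.0] * m
                for i in range(m):          # loop over output elements
                    for j in range(n):      # loop over input elements
                        out[i] += weights[i][j] * activations[j]
                return out
            raise NotImplementedError(instr.layer_type)

        # A 2x3 fully connected layer as the tensor computation.
        print(execute(TensorInstruction("fully_connected", {"out": 2, "in": 3}),
                      [1.0, 2.0, 3.0],
                      [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]))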

    NEURAL NETWORK COMPUTE TILE
    Invention Publication

    Publication number: US20240231819A1

    Publication date: 2024-07-11

    Application number: US18505743

    Filing date: 2023-11-09

    Applicant: Google LLC

    Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
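
    A compact Python sketch of the data flow in this compute tile: a traversal unit steps an index through the activation bank, placing each input activation on a shared data bus, and a MAC cell multiplies it by a parameter from the parameter bank and accumulates. The flat banks and all names are illustrative assumptions.

        def compute_tile(activation_bank, parameter_bank):
            accumulator = 0.0
            for i in range(len(activation_bank)):  # traversal unit's control signal
                bus = activation_bank[i]           # activation placed on the data bus
                param = parameter_bank[i]          # parameter from the second memory bank
                accumulator += bus * param         # multiply-accumulate ("MAC") operation
            return accumulator

        print(compute_tile([1.0, 2.0, 3.0], [0.5, 0.25, 0.125]))  # 1.375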

    Neural network accelerator with parameters resident on chip

    Publication number: US11501144B2

    Publication date: 2022-11-15

    Application number: US16569607

    Filing date: 2019-09-12

    Applicant: Google LLC

    Abstract: One embodiment of an accelerator includes a computing unit; a first memory bank for storing input activations; and a second memory bank for storing parameters used in performing computations, the second memory bank configured to store a sufficient amount of the neural network's parameters on the computing unit to allow for latency below a specified level with throughput above a specified level. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator.
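
    What distinguishes this embodiment is parameter residency: the parameter bank is sized so the whole parameter set stays on the computing unit, avoiding off-chip parameter fetches during computation. A minimal Python sketch of that residency check follows; the bank capacity and names are assumptions for illustration.

        ON_CHIP_PARAM_CAPACITY = 1 << 20  # assumed capacity of the second memory bank

        def load_parameters_resident(parameters):
            # Parameters must fit on chip to meet the latency/throughput targets.
            if len(parameters) > ON_CHIP_PARAM_CAPACITY:
                raise ValueError("parameter set exceeds the on-chip bank")
            return list(parameters)  # resident for the entire computation

        bank = load_parameters_resident([0.5, 0.25, 0.125])
        print(sum(a * p for a, p in zip([1.0, 2.0, 3.0], bank)))  # 1.375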

    NEURAL NETWORK INSTRUCTION SET ARCHITECTURE

    Publication number: US20220318594A1

    Publication date: 2022-10-06

    Application number: US17845291

    Filing date: 2022-06-21

    Applicant: Google LLC

    Abstract: A computer-implemented method includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.

    Apparatus and mechanism for processing neural network tasks using a single chip package with multiple identical dies

    Publication number: US10936942B2

    Publication date: 2021-03-02

    Application number: US15819753

    Filing date: 2017-11-21

    Applicant: Google LLC

    Abstract: Apparatus and methods for processing neural network models are provided. The apparatus can comprise a plurality of identical artificial intelligence processing dies, each including at least one inter-die input block and at least one inter-die output block. Each die in the plurality is communicatively coupled to another die in the plurality by way of one or more communication paths running from the at least one inter-die output block of one die to the at least one inter-die input block of the other. Each die corresponds to at least one layer of a neural network.
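
    A functional Python sketch of the die-to-die arrangement: identical dies are chained so each die's inter-die output block feeds the next die's inter-die input block, and each die computes one layer. The die count, weights, and toy layer function are illustrative assumptions.

        def make_die(weight):
            # Every die is identical in structure; each handles one network layer.
            def layer(x):
                return max(0.0, weight * x)  # toy layer: scale plus ReLU
            return layer

        dies = [make_die(w) for w in (0.5, 1.5, 2.0, 0.5)]  # four identical dies

        signal = 1.0
        for die in dies:  # path from one die's output block to the next die's input block
            signal = die(signal)
        print(signal)  # 0.75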

    MATRIX PROCESSING APPARATUS
    Invention Application

    Publication number: US20210034697A1

    Publication date: 2021-02-04

    Application number: US16928242

    Filing date: 2020-07-14

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements into a dense matrix. The system is configured to receive a request for an output matrix based on sparse elements, including sparse elements associated with a first dense matrix and sparse elements associated with a second dense matrix; obtain the sparse elements associated with the first dense matrix, fetched by a first group of sparse element access units; obtain the sparse elements associated with the second dense matrix, fetched by a second group of sparse element access units; and transform the fetched sparse elements to generate the output dense matrix, which includes the sparse elements associated with both the first and the second dense matrix.
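
    A small Python sketch of the transformation this abstract describes: two groups of access units each fetch sparse elements, modeled here as (row, column, value) triples, and the elements are scattered into one output dense matrix. The shapes, triple encoding, and names are illustrative assumptions.

        def fetch_group(elements):
            # Stand-in for a group of sparse element access units.
            return list(elements)

        def to_dense(rows, cols, *groups):
            dense = [[0.0] * cols for _ in range(rows)]
            for group in groups:        # elements from both dense matrices
                for r, c, v in group:
                    dense[r][c] = v
            return dense

        group_a = fetch_group([(0, 0, 1.0), (1, 2, 3.0)])  # first dense matrix
        group_b = fetch_group([(0, 3, 2.0), (1, 1, 4.0)])  # second dense matrix
        print(to_dense(2, 4, group_a, group_b))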

    Low-power adder circuit
    Invention Grant

    Publication number: US10824692B2

    Publication date: 2020-11-03

    Application number: US16159450

    Filing date: 2018-10-12

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for a circuit configured to add multiple inputs. The circuit includes a first adder section that receives a first input and a second input and adds the inputs to generate a first sum. The circuit also includes a second adder section that receives the first and second inputs and adds the inputs to generate a second sum. An input processor of the circuit receives the first and second inputs, determines whether a relationship between the first and second inputs satisfies a set of conditions, and selects a high-power mode of the adder circuit or a low-power mode of the adder circuit using the determined relationship between the first and second inputs. The high-power mode is selected and the first and second inputs are routed to the second adder section when the relationship satisfies the set of conditions.
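
    A behavioral Python sketch of the mode selection: the input processor inspects the two operands and routes them to the full-width second adder section only when they could overflow a narrow, low-power first section. The 16-bit width and the overflow test stand in for the patent's "set of conditions" and are assumptions, not the claimed logic.

        WIDTH = 16  # assumed width of the low-power first adder section

        def low_power_add(a, b):
            # First adder section: narrow addition within WIDTH bits.
            return (a + b) & ((1 << WIDTH) - 1)

        def high_power_add(a, b):
            # Second adder section: full-width addition.
            return a + b

        def adder_circuit(a, b):
            # Input processor: select high-power mode when either operand
            # is large enough that the narrow sum could overflow.
            if a >= (1 << (WIDTH - 1)) or b >= (1 << (WIDTH - 1)):
                return high_power_add(a, b)  # high-power mode
            return low_power_add(a, b)       # low-power mode

        print(adder_circuit(3, 5))        # low-power path: 8
        print(adder_circuit(1 << 15, 1))  # high-power path: 32769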

    Accessing prologue and epilogue data

    Publication number: US10802956B2

    Publication date: 2020-10-13

    Application number: US16112307

    Filing date: 2018-08-24

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including an apparatus for accessing data. In some implementations, an apparatus includes address offset value elements that are each configured to store an address offset value. For each address offset value element, the apparatus can include address computation elements that each store a value used to determine the address offset value. One or more processors are configured to receive a program for performing computations using tensor elements of a tensor. The processor(s) can identify, in the program, a prologue or epilogue loop having a corresponding data array for storing values of the prologue or epilogue loop and populate, for a first address offset value element that corresponds to the prologue or epilogue loop, the address computation elements for the first address offset value element with respective values based at least on a number of iterations of the prologue or epilogue loop.
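
    A minimal Python sketch of the addressing idea: for a prologue or epilogue loop, the address offset values are populated from the loop's iteration count and an element stride, so they can later be emitted without recomputation. The flat byte-offset layout and parameter names are illustrative assumptions.

        def populate_offsets(base, iterations, stride):
            # One address offset value per prologue/epilogue loop iteration.
            return [base + i * stride for i in range(iterations)]

        # A 3-iteration prologue over 4-byte elements starting at offset 0.
        print(populate_offsets(base=0, iterations=3, stride=4))  # [0, 4, 8]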
