NEURAL NETWORK INSTRUCTION SET ARCHITECTURE

    Publication Number: US20180121786A1

    Publication Date: 2018-05-03

    Application Number: US15336216

    Application Date: 2016-10-27

    Applicant: Google LLC

    CPC classification numbers: G06N3/04; G06F13/28; G06N3/0454; G06N3/063

    Abstract: A computer-implemented method that includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
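    As a concrete illustration of the mechanism the abstract describes, the following C sketch executes a loop nest whose bounds come from data values carried by an instruction. The struct layout, field names, and the fully connected case are assumptions made for the example, not the patent's actual instruction format.

```c
#include <stdio.h>

/* Hypothetical instruction encoding; the field names are assumptions,
 * not the patent's actual format. */
struct TensorInstruction {
    int layer_type;  /* 0 = fully connected in this sketch */
    int dims[2];     /* loop bounds carried as instruction data values */
};

/* Execute a loop nest whose structure is defined by the instruction. */
static void execute_tensor_computation(const struct TensorInstruction *insn,
                                       const float *in, const float *w,
                                       float *out) {
    if (insn->layer_type == 0) {
        /* fully connected: out[i] = sum_j in[j] * w[i][j] */
        for (int i = 0; i < insn->dims[0]; ++i) {
            float acc = 0.0f;
            for (int j = 0; j < insn->dims[1]; ++j)
                acc += in[j] * w[i * insn->dims[1] + j];
            out[i] = acc;
        }
    }
    /* other layer types would select differently shaped nests */
}

int main(void) {
    struct TensorInstruction insn = { 0, { 2, 3 } };
    float in[3] = { 1, 2, 3 };
    float w[6]  = { 1, 0, 0,   /* 2x3 weight matrix, row-major */
                    0, 1, 0 };
    float out[2];
    execute_tensor_computation(&insn, in, w, out);
    printf("%f %f\n", out[0], out[1]);  /* 1.000000 2.000000 */
    return 0;
}
```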

    Multi-partition memory sharing with multiple components

    Publication Number: US12013780B2

    Publication Date: 2024-06-18

    Application Number: US17425918

    Application Date: 2020-08-19

    Applicant: Google LLC

    Abstract: Components on an IC chip may operate faster or provide higher performance relative to power consumption if allowed access to sufficient memory resources. If every component is provided its own memory, however, the chip becomes expensive. In described implementations, memory is shared between two or more components. For example, a processing component can include computational circuitry and a memory coupled thereto. A multi-component cache controller is coupled to the memory. Logic circuitry is coupled to the cache controller and the memory. The logic circuitry selectively separates the memory into multiple memory partitions. A first memory partition can be allocated to the computational circuitry and provide storage to the computational circuitry. A second memory partition can be allocated to the cache controller and provide storage to multiple components. The relative capacities of the memory partitions are adjustable to accommodate fluctuating demands without dedicating individual memories to the components.
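    The following minimal C sketch models the adjustable split the abstract describes: one physical memory, two partitions whose relative capacities can be moved at runtime. The sizes and names are assumptions for illustration, not the described hardware interface.

```c
#include <assert.h>
#include <stdio.h>

/* Total capacity of the one shared on-chip memory (assumed size). */
#define MEMORY_BYTES 4096u

struct MemoryPartitions {
    unsigned compute_bytes;  /* allocated to the computational circuitry */
    unsigned cache_bytes;    /* allocated to the multi-component cache */
};

/* Move the partition boundary; the two capacities always sum to the
 * physical size, so neither component needs a dedicated memory. */
static void repartition(struct MemoryPartitions *p, unsigned compute_bytes) {
    assert(compute_bytes <= MEMORY_BYTES);
    p->compute_bytes = compute_bytes;
    p->cache_bytes = MEMORY_BYTES - compute_bytes;
}

int main(void) {
    struct MemoryPartitions p;
    repartition(&p, 1024u);  /* light compute demand: most capacity caches */
    printf("compute=%u cache=%u\n", p.compute_bytes, p.cache_bytes);
    repartition(&p, 3072u);  /* demand grows: shift the boundary */
    printf("compute=%u cache=%u\n", p.compute_bytes, p.cache_bytes);
    return 0;
}
```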

    NEURAL NETWORK ACCELERATOR WITH PARAMETERS RESIDENT ON CHIP

    Publication Number: US20240078417A1

    Publication Date: 2024-03-07

    Application Number: US18217107

    Application Date: 2023-06-30

    Applicant: Google LLC

    Abstract: One embodiment of an accelerator includes a computing unit; a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations, the second memory bank configured to store a sufficient amount of the neural network parameters on the computing unit to allow for latency below a specified level with throughput above a specified level. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator.
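    A minimal C sketch of the multiply-accumulate step the abstract describes: a cell reads a parameter from the parameter bank and an activation from the activation bank, then accumulates their product. The names and the software loop standing in for hardware control are assumptions.

```c
#include <stdio.h>

/* One MAC cell: an accumulator updated by multiply-accumulate steps. */
struct MacCell {
    float accumulator;
};

static void mac(struct MacCell *cell, float activation, float parameter) {
    cell->accumulator += activation * parameter;
}

int main(void) {
    /* first bank: input activations; second bank: resident parameters */
    float activations[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    float parameters[4]  = { 0.5f, 0.5f, 0.5f, 0.5f };

    struct MacCell cell = { 0.0f };
    for (int i = 0; i < 4; ++i)
        mac(&cell, activations[i], parameters[i]);
    printf("%f\n", cell.accumulator);  /* 0.5 * (1+2+3+4) = 5.000000 */
    return 0;
}
```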

    Neural network compute tile
    Invention Grant

    Publication Number: US11422801B2

    Publication Date: 2022-08-23

    Application Number: US16239760

    Application Date: 2019-01-04

    Applicant: Google LLC

    Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
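    To illustrate the traversal unit's role, this C sketch models it as an address stepper that places each activation on a shared data-bus variable read by the MAC operation. The variable names and the software model of the bus are assumptions for the sketch.

```c
#include <stdio.h>

#define BANK_SIZE 4

/* First memory bank (activations), second bank (parameters), and the
 * shared data bus, modeled as plain variables for the sketch. */
static float activation_bank[BANK_SIZE] = { 1, 2, 3, 4 };
static float parameter_bank[BANK_SIZE]  = { 2, 2, 2, 2 };
static float data_bus;

int main(void) {
    float accumulator = 0.0f;
    for (int addr = 0; addr < BANK_SIZE; ++addr) {
        /* traversal unit: control signal causes the bank to drive the bus */
        data_bus = activation_bank[addr];
        /* MAC operator: multiply the bus value by a parameter, accumulate */
        accumulator += data_bus * parameter_bank[addr];
    }
    printf("%f\n", accumulator);  /* 2 * (1 + 2 + 3 + 4) = 20.000000 */
    return 0;
}
```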

    Neural network instruction set architecture

    Publication Number: US11379707B2

    Publication Date: 2022-07-05

    Application Number: US15820704

    Application Date: 2017-11-22

    Applicant: Google LLC

    Abstract: A computer-implemented method that includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
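    Complementing the example under the related application above, this C sketch shows the other data value the abstract calls out: a layer-type field selecting differently shaped loop nests. The layer types and bounds are assumed example values.

```c
#include <stdio.h>

/* Assumed layer types; the instruction's layer-type value would select
 * which nest shape to execute. */
enum LayerType { FULLY_CONNECTED, CONV_2D };

static long run_loop_nest(enum LayerType type) {
    long macs = 0;
    if (type == FULLY_CONNECTED) {
        /* two-deep nest: 8 outputs x 16 inputs */
        for (int o = 0; o < 8; ++o)
            for (int i = 0; i < 16; ++i)
                ++macs;
    } else {
        /* four-deep nest: 4x4 output positions, 3x3 kernel window */
        for (int y = 0; y < 4; ++y)
            for (int x = 0; x < 4; ++x)
                for (int ky = 0; ky < 3; ++ky)
                    for (int kx = 0; kx < 3; ++kx)
                        ++macs;
    }
    return macs;
}

int main(void) {
    printf("fully connected MACs: %ld\n", run_loop_nest(FULLY_CONNECTED)); /* 128 */
    printf("convolution MACs:     %ld\n", run_loop_nest(CONV_2D));         /* 144 */
    return 0;
}
```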

    LOW-POWER ADDER CIRCUIT
    Invention Application

    Publication Number: US20210019361A1

    Publication Date: 2021-01-21

    Application Number: US17063813

    Application Date: 2020-10-06

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for a circuit configured to add multiple inputs. The circuit includes a first adder section that receives a first input and a second input and adds the inputs to generate a first sum. The circuit also includes a second adder section that receives the first and second inputs and adds the inputs to generate a second sum. An input processor of the circuit receives the first and second inputs, determines whether a relationship between the first and second inputs satisfies a set of conditions, and selects a high-power mode of the adder circuit or a low-power mode of the adder circuit using the determined relationship between the first and second inputs. The high-power mode is selected and the first and second inputs are routed to the second adder section when the relationship satisfies the set of conditions.
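    A hedged C sketch of the mode selection: when both inputs fit a narrow width, only a small adder section is exercised; otherwise the inputs are routed to the full-width section. The 16-bit/narrow split and the fit test are assumptions standing in for the patent's set of conditions.

```c
#include <stdint.h>
#include <stdio.h>

/* Returns nonzero when v fits the narrow operand width. The 15-bit
 * range is an assumed threshold: two such operands always sum within
 * 16 bits, so the narrow adder section cannot overflow. */
static int fits_narrow(int32_t v) {
    return v >= -16384 && v <= 16383;
}

static int32_t select_and_add(int32_t a, int32_t b) {
    if (!(fits_narrow(a) && fits_narrow(b))) {
        /* high-power mode: inputs routed to the full-width section */
        return a + b;
    }
    /* low-power mode: only the narrow adder section is exercised */
    return (int16_t)(a + b);
}

int main(void) {
    printf("%d\n", (int)select_and_add(100, 200));      /* low power: 300 */
    printf("%d\n", (int)select_and_add(40000, 50000));  /* high power: 90000 */
    return 0;
}
```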

    Alternative loop limits for accessing data in multi-dimensional tensors

    Publication Number: US10885434B2

    Publication Date: 2021-01-05

    Application Number: US16297091

    Application Date: 2019-03-08

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus for accessing an N-dimensional tensor are described. In some implementations, a method includes, for each of one or more first iterations of a first nested loop, performing iterations of a second nested loop that is nested within the first nested loop until a first loop bound for the second nested loop is reached. A number of iterations of the second nested loop for the one or more first iterations of the first nested loop is limited by the first loop bound in response to the second nested loop having a total number of iterations that exceeds a value of a hardware property of a computing system. After the penultimate iteration of the first nested loop has completed, one or more iterations of the second nested loop are performed for a final iteration of the first nested loop until an alternative loop bound is reached.
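    The following C sketch shows the scheme in miniature: the inner loop runs to a hardware-limited bound on every outer iteration except the last, which runs to a smaller alternative bound covering the leftover iterations. All bounds are assumed example values.

```c
#include <stdio.h>

#define TOTAL_ELEMENTS 10  /* inner iterations required in total */
#define HW_LIMIT 4         /* largest inner bound the hardware supports */

int main(void) {
    /* ceil(10 / 4) = 3 outer iterations */
    int outer_iters = (TOTAL_ELEMENTS + HW_LIMIT - 1) / HW_LIMIT;
    /* alternative bound for the final outer iteration: 10 - 2*4 = 2 */
    int alt_bound = TOTAL_ELEMENTS - (outer_iters - 1) * HW_LIMIT;

    int processed = 0;
    for (int i = 0; i < outer_iters; ++i) {
        /* the final iteration runs to the alternative (smaller) bound */
        int bound = (i == outer_iters - 1) ? alt_bound : HW_LIMIT;
        for (int j = 0; j < bound; ++j)
            ++processed;
    }
    printf("%d\n", processed);  /* 10 */
    return 0;
}
```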

    NEURAL NETWORK ACCELERATOR WITH PARAMETERS RESIDENT ON CHIP

    Publication Number: US20200005128A1

    Publication Date: 2020-01-02

    Application Number: US16569607

    Application Date: 2019-09-12

    Applicant: Google LLC

    Abstract: One embodiment of an accelerator includes a computing unit; a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations, the second memory bank configured to store a sufficient amount of the neural network parameters on the computing unit to allow for latency below a specified level with throughput above a specified level. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator.
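    As a complement to the MAC example under the related publication above, this C sketch illustrates the residency idea: if the summed parameter footprint fits the on-chip bank, no off-chip parameter fetches are needed during computation. The sizes and layer names are assumptions.

```c
#include <stdio.h>

/* On-chip parameter bank capacity; the size is an assumption. */
#define PARAM_BANK_BYTES (8u * 1024u * 1024u)

struct Layer {
    const char *name;
    unsigned param_bytes;
};

int main(void) {
    /* Hypothetical per-layer parameter footprints. */
    struct Layer layers[] = {
        { "conv1", 3u * 1024u * 1024u },
        { "fc1",   4u * 1024u * 1024u },
    };
    unsigned total = 0;
    for (unsigned i = 0; i < sizeof layers / sizeof layers[0]; ++i)
        total += layers[i].param_bytes;

    if (total <= PARAM_BANK_BYTES)
        printf("%u bytes resident on chip: no off-chip parameter fetches\n",
               total);
    else
        printf("parameters exceed the bank: off-chip transfers needed\n");
    return 0;
}
```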

    Matrix processing apparatus
    Invention Grant

    Publication Number: US10417303B2

    Publication Date: 2019-09-17

    Application Number: US15695144

    Application Date: 2017-09-05

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements to a dense matrix. The system is configured to receive a request for an output matrix based on sparse elements including sparse elements associated with a first dense matrix and sparse elements associated with a second dense matrix; obtain the sparse elements associated with the first dense matrix fetched by a first group of sparse element access units; obtain the sparse elements associated with the second dense matrix fetched by a second group of sparse element access units; and transform the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix to generate the output dense matrix that includes the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix.
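    A minimal C sketch of the transform: (row, column, value) triples standing in for sparse elements fetched by two groups of access units are scattered into a single dense output matrix. The triple representation and the names are assumptions for illustration.

```c
#include <stdio.h>
#include <string.h>

#define ROWS 2
#define COLS 4

/* A sparse element as a (row, column, value) triple; this layout is an
 * assumption for the sketch. */
struct SparseElement {
    int row, col;
    float value;
};

/* Scatter sparse elements into the dense output matrix. */
static void scatter(float dense[ROWS][COLS],
                    const struct SparseElement *elems, int n) {
    for (int i = 0; i < n; ++i)
        dense[elems[i].row][elems[i].col] = elems[i].value;
}

int main(void) {
    /* Elements fetched by the first and second groups of access units. */
    struct SparseElement first[]  = { { 0, 1, 1.5f }, { 0, 3, 2.5f } };
    struct SparseElement second[] = { { 1, 0, 3.5f } };

    float dense[ROWS][COLS];
    memset(dense, 0, sizeof dense);
    scatter(dense, first, 2);
    scatter(dense, second, 1);

    for (int r = 0; r < ROWS; ++r) {
        for (int c = 0; c < COLS; ++c)
            printf("%4.1f ", dense[r][c]);
        printf("\n");
    }
    return 0;
}
```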

    HARDWARE DOUBLE BUFFERING USING A SPECIAL PURPOSE COMPUTATIONAL UNIT

    Publication Number: US20190012112A1

    Publication Date: 2019-01-10

    Application Number: US15641824

    Application Date: 2017-07-05

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including an apparatus for transferring data using multiple buffers, including multiple memories and one or more processing units configured to determine buffer memory addresses for a sequence of data elements stored in a first data storage location that are being transferred to a second data storage location. For each group of one or more of the data elements in the sequence, a value of a buffer assignment element that can be switched between multiple values each corresponding to a different one of the memories is identified. A buffer memory address for the group of one or more data elements is determined based on the value of the buffer assignment element. The value of the buffer assignment element is switched prior to determining the buffer memory address for a subsequent group of one or more data elements of the sequence of data elements.
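    This C sketch models the buffer assignment element as a one-bit value toggled after each group of elements, steering successive groups into alternating memories. The group size and the names are assumptions for the sketch.

```c
#include <stdio.h>

#define GROUPS 4

int main(void) {
    float buffer_a[GROUPS], buffer_b[GROUPS];
    int assignment = 0;  /* buffer assignment element: 0 -> A, 1 -> B */
    int used_a = 0, used_b = 0;

    for (int g = 0; g < GROUPS; ++g) {
        float value = (float)g;  /* group of data elements being moved */
        /* the buffer memory address is determined by the assignment bit */
        if (assignment == 0)
            buffer_a[used_a++] = value;
        else
            buffer_b[used_b++] = value;
        /* switch the assignment element before the next group */
        assignment ^= 1;
    }
    printf("buffer A: %d elements, buffer B: %d elements\n", used_a, used_b);
    return 0;
}
```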
