Permuting in a matrix-vector processor

    Publication Number: US10956537B2

    Publication Date: 2021-03-23

    Application Number: US16840972

    Filing Date: 2020-04-06

    Applicant: Google LLC

    Abstract: A circuit comprises an input register configured to receive an input vector of elements, a control register configured to receive a control vector of elements, wherein each element of the control vector corresponds to a respective element of the input vector, and wherein each element specifies a permutation of a corresponding element of the input vector, and a permute execution circuit configured to generate an output vector of elements corresponding to a permutation of the input vector. Generating each element of the output vector comprises accessing, at the input register, a particular element of the input vector, accessing, at the control register, a particular element of the control vector corresponding to the particular element of the input vector, and outputting the particular element of the input vector as an element at a particular position of the output vector that is selected based on the particular element of the control vector.
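
    As a rough software analogue (not the circuit itself), the sketch below models the scatter-style permutation the abstract describes: element i of the input vector is written to the output position named by element i of the control vector. All names are illustrative.

        def permute(input_vector, control_vector):
            """Place each input element at the output slot its control element selects."""
            assert len(input_vector) == len(control_vector)
            output = [None] * len(input_vector)
            for i, value in enumerate(input_vector):
                # control_vector[i] names the output position that receives input element i.
                output[control_vector[i]] = value
            return output

        # Example: a left rotation by one position expressed as a permutation.
        print(permute([10, 20, 30, 40], [3, 0, 1, 2]))  # [20, 30, 40, 10]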

    PERMUTING IN A MATRIX-VECTOR PROCESSOR
    Invention Application

    Publication Number: US20200301996A1

    Publication Date: 2020-09-24

    Application Number: US16840972

    Filing Date: 2020-04-06

    Applicant: Google LLC

    Abstract: A circuit comprises an input register configured to receive an input vector of elements, a control register configured to receive a control vector of elements, wherein each element of the control vector corresponds to a respective element of the input vector, and wherein each element specifies a permutation of a corresponding element of the input vector, and a permute execution circuit configured to generate an output vector of elements corresponding to a permutation of the input vector. Generating each element of the output vector comprises accessing, at the input register, a particular element of the input vector, accessing, at the control register, a particular element of the control vector corresponding to the particular element of the input vector, and outputting the particular element of the input vector as an element at a particular position of the output vector that is selected based on the particular element of the control vector.

    NEURAL NETWORK COMPUTE TILE
    Invention Application

    Publication Number: US20190213005A1

    Publication Date: 2019-07-11

    Application Number: US16239760

    Filing Date: 2019-01-04

    Applicant: Google LLC

    Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
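
    The core arithmetic here is a multiply-accumulate over activations and parameters held in separate memory banks. The Python sketch below is a hypothetical software model of that step, not the tile's actual implementation.

        def mac_cell(activation_bank, parameter_bank, indices):
            """Accumulate products of activations (first bank) and parameters (second bank)."""
            accumulator = 0
            for i in indices:
                activation = activation_bank[i]  # value the traversal unit places on the data bus
                parameter = parameter_bank[i]    # value fetched from the second memory bank
                accumulator += activation * parameter
            return accumulator

        print(mac_cell([1.0, 2.0, 3.0], [0.5, 0.25, 0.125], range(3)))  # 1.375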

    Alternative loop limits for accessing data in multi-dimensional tensors

    Publication Number: US10248908B2

    Publication Date: 2019-04-02

    Application Number: US15627022

    Filing Date: 2017-06-19

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus for accessing a N-dimensional tensor are described. In some implementations, a method includes, for each of one or more first iterations of a first nested loop, performing iterations of a second nested loop that is nested within the first nested loop until a first loop bound for the second nested loop is reached. A number of iterations of the second nested loop for the one or more first iterations of the first nested loop is limited by the first loop bound in response to the second nested loop having a total number of iterations that exceeds a value of a hardware property of the computing system. After a penultimate iteration of the first nested loop has completed, one or more iterations of the second nested loop are performed for a final iteration of the first nested loop until an alternative loop bound is reached.
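
    A minimal sketch of the loop-bound idea, with assumed numbers rather than anything taken from the patent: every outer iteration runs the inner loop to a fixed first bound, except the final outer iteration, which runs to an alternative bound that covers the remainder.

        def visit_elements(total_elements, first_loop_bound):
            """Visit element indices with a fixed inner bound, switching to an alternative bound at the end."""
            outer_iterations = -(-total_elements // first_loop_bound)  # ceiling division
            alternative_bound = total_elements - (outer_iterations - 1) * first_loop_bound
            visited = []
            for outer in range(outer_iterations):
                last = outer == outer_iterations - 1
                inner_bound = alternative_bound if last else first_loop_bound
                for inner in range(inner_bound):
                    visited.append(outer * first_loop_bound + inner)
            return visited

        # 10 elements with an inner bound of 4: bounds are 4, 4, then the alternative bound 2.
        assert visit_elements(10, 4) == list(range(10))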

    ACCESSING PROLOGUE AND EPILOGUE DATA
    Invention Application

    Publication Number: US20190034327A1

    Publication Date: 2019-01-31

    Application Number: US16112307

    Filing Date: 2018-08-24

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including an apparatus for accessing data. In some implementations, an apparatus includes address offset value elements that are each configured to store an address offset value. For each address offset value element, the apparatus can include address computation elements that each store a value used to determine the address offset value. One or more processors are configured to receive a program for performing computations using tensor elements of a tensor. The processor(s) can identify, in the program, a prologue or epilogue loop having a corresponding data array for storing values of the prologue or epilogue loop and populate, for a first address offset value element that corresponds to the prologue or epilogue loop, the address computation elements for the first address offset value element with respective values based at least on a number of iterations of the prologue or epilogue loop.
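
    As an illustration only (the memory layout and names below are assumed, not taken from the patent), the address offsets for a prologue loop can be derived from its iteration count and element stride:

        def prologue_addresses(base_address, element_stride, prologue_iterations):
            """One address offset per prologue iteration, stepped by the element stride."""
            return [base_address + i * element_stride for i in range(prologue_iterations)]

        print(prologue_addresses(base_address=0x1000, element_stride=4, prologue_iterations=3))
        # [4096, 4100, 4104]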

    Neural network compute tile
    Invention Grant

    Publication Number: US10175980B2

    Publication Date: 2019-01-08

    Application Number: US15335769

    Filing Date: 2016-10-27

    Applicant: Google LLC

    Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.

    SPECIAL PURPOSE NEURAL NETWORK TRAINING CHIP
    Invention Application

    Publication Number: US20180336456A1

    Publication Date: 2018-11-22

    Application Number: US15983056

    Filing Date: 2018-05-17

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus including a special purpose hardware chip for training neural networks are described. The special-purpose hardware chip may include a scalar processor configured to control computational operation of the special-purpose hardware chip. The chip may also include a vector processor configured to have a 2-dimensional array of vector processing units which all execute the same instruction in a single instruction, multiple-data manner and communicate with each other through load and store instructions of the vector processor. The chip may additionally include a matrix multiply unit that is coupled to the vector processor configured to multiply at least one two-dimensional matrix with a second one-dimensional vector or two-dimensional matrix in order to obtain a multiplication result.
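
    The matrix multiply unit's basic operation, a two-dimensional matrix applied to a one-dimensional vector, can be sketched in plain Python as follows (a functional stand-in, not the chip's actual hardware implementation):

        def matmul_vector(matrix, vector):
            """Multiply a 2-D matrix by a 1-D vector, one dot product per output row."""
            assert all(len(row) == len(vector) for row in matrix)
            return [sum(m * v for m, v in zip(row, vector)) for row in matrix]

        print(matmul_vector([[1, 2], [3, 4]], [10, 20]))  # [50, 110]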

    Vector reduction processor
    Invention Grant

    Publication Number: US10108581B1

    Publication Date: 2018-10-23

    Application Number: US15477791

    Filing Date: 2017-04-03

    Applicant: Google LLC

    Abstract: A vector reduction circuit configured to reduce an input vector of elements comprises a plurality of cells, wherein each of the plurality of cells other than a designated first cell that receives a designated first element of the input vector is configured to receive a particular element of the input vector, receive, from another of the one or more cells, a temporary reduction element, perform a reduction operation using the particular element and the temporary reduction element, and provide, as a new temporary reduction element, a result of performing the reduction operation using the particular element and the temporary reduction element. The vector reduction circuit also comprises an output circuit configured to provide, for output as a reduction of the input vector, a new temporary reduction element corresponding to a result of performing the reduction operation using a last element of the input vector.
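
    A software model of the chained reduction the abstract describes, with hypothetical names: each cell combines its own element with the temporary reduction value handed along from the previous cell, and the value produced after the last element is the output.

        def reduce_vector(elements, reduction_op):
            """Fold a reduction operation across the vector, cell by cell."""
            temporary = elements[0]       # designated first cell holds the first element
            for element in elements[1:]:  # each remaining cell combines with the incoming value
                temporary = reduction_op(element, temporary)
            return temporary              # the output circuit provides the final value

        print(reduce_vector([3, 1, 4, 1, 5], max))                 # 5
        print(reduce_vector([3, 1, 4, 1, 5], lambda a, b: a + b))  # 14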

    PERMUTING IN A MATRIX-VECTOR PROCESSOR
    Invention Application

    Publication Number: US20180253403A1

    Publication Date: 2018-09-06

    Application Number: US15966275

    Filing Date: 2018-04-30

    Applicant: Google LLC

    CPC classification number: G06F17/16 G06F7/76 G06F9/30032 G06F9/30036

    Abstract: A circuit comprises an input register configured to receive an input vector of elements, a control register configured to receive a control vector of elements, wherein each element of the control vector corresponds to a respective element of the input vector, and wherein each element specifies a permutation of a corresponding element of the input vector, and a permute execution circuit configured to generate an output vector of elements corresponding to a permutation of the input vector. Generating each element of the output vector comprises accessing, at the input register, a particular element of the input vector, accessing, at the control register, a particular element of the control vector corresponding to the particular element of the input vector, and outputting the particular element of the input vector as an element at a particular position of the output vector that is selected based on the particular element of the control vector.

    ACCELERATING NEURAL NETWORKS IN HARDWARE USING INTERCONNECTED CROSSBARS

    Publication Number: US20250165766A1

    Publication Date: 2025-05-22

    Application Number: US19024066

    Filing Date: 2025-01-16

    Applicant: Google LLC

    Abstract: A computing unit for accelerating a neural network is disclosed. The computing unit may include an input unit that includes a digital-to-analog conversion unit and an analog-to-digital conversion unit, the analog-to-digital conversion unit being configured to receive an analog signal from the output of a last interconnected analog crossbar circuit of a plurality of analog crossbar circuits and convert that analog signal into a digital output vector, and a plurality of interconnected analog crossbar circuits that include a first interconnected analog crossbar circuit and the last interconnected crossbar circuit, wherein a second interconnected analog crossbar circuit of the plurality of interconnected analog crossbar circuits is configured to receive a third analog signal from another interconnected analog crossbar circuit of the plurality of interconnected crossbar circuits and perform one or more operations on the third analog signal based on the matrix weights stored by the crosspoints of the second interconnected analog crossbar circuit.
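
    Purely as a digital simulation with made-up weights (the patent describes analog hardware), each crossbar can be modeled as a matrix-vector product applied to the signal it receives, with conversion steps standing in for the digital-to-analog and analog-to-digital units:

        def run_crossbar_chain(digital_input, crossbar_weight_matrices):
            """Pass a signal through a chain of crossbars, each modeled as a matrix-vector product."""
            signal = [float(x) for x in digital_input]  # stand-in for digital-to-analog conversion
            for weights in crossbar_weight_matrices:    # interconnected crossbar circuits in order
                signal = [sum(w * s for w, s in zip(row, signal)) for row in weights]
            return [round(x) for x in signal]           # stand-in for analog-to-digital conversion

        weights = [[[0.5, 0.5], [1.0, 0.0]], [[2.0, 0.0], [0.0, 2.0]]]
        print(run_crossbar_chain([2, 4], weights))  # [6, 4]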
