-
Publication Number: US10719575B2
Publication Date: 2020-07-21
Application Number: US16571749
Application Date: 2019-09-16
Applicant: Google LLC
Inventor: Ravi Narayanaswami, Rahul Nagarajan, Dong Hyuk Woo, Christopher Daniel Leary
Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements to a dense matrix. The system is configured to receive a request for an output matrix based on sparse elements including sparse elements associated with a first dense matrix and sparse elements associated with a second dense matrix; obtain the sparse elements associated with the first dense matrix fetched by a first group of sparse element access units; obtain the sparse elements associated with the second dense matrix fetched by a second group of sparse element access units; and transform the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix to generate the output dense matrix that includes the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix.
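The scheme in this abstract is essentially a gather-and-stack: each group of sparse element access units fetches rows of its dense matrix, and the fetched rows are combined into one output dense matrix. Below is a minimal Python sketch of that idea, assuming (hypothetically) that each sparse element is a row index into its dense matrix; all names and shapes are illustrative, not taken from the patent.

import numpy as np

def gather_rows(dense_matrix, sparse_indices):
    # Each "sparse element access unit" fetches one row per index.
    return dense_matrix[sparse_indices, :]

# Hypothetical inputs: two dense matrices and the sparse indices tied to each.
first_dense = np.arange(12.0).reshape(4, 3)       # 4 rows of width 3
second_dense = np.arange(12.0, 27.0).reshape(5, 3)
first_indices = [0, 2]    # fetched by the first group of access units
second_indices = [1, 4]   # fetched by the second group of access units

# Transform: concatenate the fetched rows into a single output dense matrix.
output_dense = np.concatenate(
    [gather_rows(first_dense, first_indices),
     gather_rows(second_dense, second_indices)],
    axis=0,
)
print(output_dense.shape)  # (4, 3)

In the patent the fetching is done by parallel hardware access units; the sequential NumPy version only illustrates the data movement.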
-
Publication Number: US20200183612A1
Publication Date: 2020-06-11
Application Number: US16700385
Application Date: 2019-12-02
Applicant: Google LLC
Inventor: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
Abstract: Methods, systems, and apparatus, including an apparatus for transferring data using multiple buffers, including multiple memories and one or more processing units configured to determine buffer memory addresses for a sequence of data elements stored in a first data storage location that are being transferred to a second data storage location. For each group of one or more of the data elements in the sequence, a value of a buffer assignment element that can be switched between multiple values each corresponding to a different one of the memories is identified. A buffer memory address for the group of one or more data elements is determined based on the value of the buffer assignment element. The value of the buffer assignment element is switched prior to determining the buffer memory address for a subsequent group of one or more data elements of the sequence of data elements.
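The buffer assignment element described here behaves like a toggle that ping-pongs consecutive groups of data elements between memories (classic double buffering). A minimal Python sketch under that assumption, with hypothetical base addresses and names:

def assign_buffer_addresses(num_groups, memory_base_addrs):
    """Toggle a buffer assignment value across memories, one group at a time."""
    assignment = 0                      # value of the buffer assignment element
    addresses = []
    for group in range(num_groups):
        base = memory_base_addrs[assignment]
        addresses.append(base)          # buffer memory address for this group
        # Switch the assignment element before handling the next group.
        assignment = (assignment + 1) % len(memory_base_addrs)
    return addresses

# Hypothetical two-memory (double-buffer) configuration.
print(assign_buffer_addresses(5, [0x0000, 0x1000]))
# [0, 4096, 0, 4096, 0]

With two memories the addresses alternate, so a producer can fill one buffer while a consumer drains the other.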
-
Publication Number: US10534607B2
Publication Date: 2020-01-14
Application Number: US15903991
Application Date: 2018-02-23
Applicant: Google LLC
Inventor: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
IPC: G06F9/302, G06F9/355, G06F17/16, G06N3/00, G06F9/30, G06F9/34, G06F9/32, G06N3/04, G06N3/063
Abstract: Methods, systems, and apparatus, including an apparatus for accessing an N-dimensional tensor, the apparatus including, for each dimension of the N-dimensional tensor, a partial address offset value element that stores a partial address offset value for the dimension based at least on an initial value for the dimension, a step value for the dimension, and a number of iterations of a loop for the dimension. The apparatus includes a hardware adder and a processor. The processor obtains an instruction to access a particular element of the N-dimensional tensor. The N-dimensional tensor has multiple elements arranged across each of the N dimensions, where N is an integer that is equal to or greater than one. The processor determines, using the partial address offset value elements and the hardware adder, an address of the particular element and outputs data indicating the determined address for accessing the particular element of the N-dimensional tensor.
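The address computation reads like a strided offset sum: each dimension contributes a partial offset (its loop index times its step value), and a hardware adder accumulates them. A small Python sketch of that arithmetic, assuming row-major strides and element-granularity addresses; the stride values and tensor shape are hypothetical.

def element_address(base_addr, indices, step_values):
    """Sum per-dimension partial address offsets (index * step) into one address."""
    address = base_addr
    for index, step in zip(indices, step_values):
        partial_offset = index * step   # partial address offset value for this dimension
        address += partial_offset       # the hardware adder accumulates the offsets
    return address

# Hypothetical 3-D tensor of shape (2, 3, 4) stored row-major, 1 word per element.
step_values = (12, 4, 1)                # strides derived from the loop bounds
print(element_address(0, (1, 2, 3), step_values))  # 1*12 + 2*4 + 3*1 = 23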
-
Publication Number: US20200012608A1
Publication Date: 2020-01-09
Application Number: US16514562
Application Date: 2019-07-17
Applicant: Google LLC
Inventor: Dong Hyuk Woo, Ravi Narayanaswami
Abstract: A computer-implemented method includes receiving, by a computing device, input activations and determining, by a controller of the computing device, whether each of the input activations has either a zero value or a non-zero value. The method further includes storing, in a memory bank of the computing device, at least one of the input activations. Storing the at least one input activation includes generating an index comprising one or more memory address locations that have input activation values that are non-zero values. The method still further includes providing, by the controller and from the memory bank, at least one input activation onto a data bus that is accessible by one or more units of a computational array. The activations are provided, at least in part, from a memory address location associated with the index.
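In software terms, the controller builds an index of memory addresses whose activations are non-zero and later serves only those addresses onto the data bus, so zero-valued activations can be skipped. A rough Python sketch under that reading; the list-based memory bank and function names are illustrative only.

def store_activations(input_activations):
    """Store activations and build an index of addresses holding non-zero values."""
    memory_bank = list(input_activations)   # memory address == list position
    index = [addr for addr, value in enumerate(memory_bank) if value != 0]
    return memory_bank, index

def provide_activations(memory_bank, index):
    """Provide only indexed (non-zero) activations onto the data bus."""
    return [memory_bank[addr] for addr in index]

bank, nz_index = store_activations([0.0, 1.5, 0.0, 0.0, 2.0])
print(nz_index)                              # [1, 4]
print(provide_activations(bank, nz_index))   # [1.5, 2.0]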
-
Publication Number: US20190138243A1
Publication Date: 2019-05-09
Application Number: US16240459
Application Date: 2019-01-04
Applicant: Google LLC
Inventor: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
Abstract: Methods, systems, and apparatus, including an apparatus for transferring data using multiple buffers, including multiple memories and one or more processing units configured to determine buffer memory addresses for a sequence of data elements stored in a first data storage location that are being transferred to a second data storage location. For each group of one or more of the data elements in the sequence, a value of a buffer assignment element that can be switched between multiple values each corresponding to a different one of the memories is identified. A buffer memory address for the group of one or more data elements is determined based on the value of the buffer assignment element. The value of the buffer assignment element is switched prior to determining the buffer memory address for a subsequent group of one or more data elements of the sequence of data elements.
-
Publication Number: US20190050717A1
Publication Date: 2019-02-14
Application Number: US16059686
Application Date: 2018-08-09
Applicant: Google LLC
Inventor: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
Abstract: One embodiment of an accelerator includes a computing unit; a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations, the second memory bank configured to store a sufficient amount of the neural network parameters on the computing unit to allow for latency below a specified level with throughput above a specified level. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs computations associated with at least one element of a data array, the one or more computations performed by the MAC operator.
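Functionally, each cell pairs activations placed on the data bus by the traversal unit with parameters streamed from the second memory bank and accumulates their products. A simplified Python sketch of one such MAC cell computing a single dot product; the banks, indices, and control flow are stand-ins for the hardware described above, not its actual interface.

def mac_cell(activation_bank, parameter_bank, control_indices):
    """Multiply-accumulate activations against parameters for one output element."""
    accumulator = 0.0
    for addr in control_indices:        # traversal unit drives which activation is on the bus
        activation = activation_bank[addr]
        parameter = parameter_bank[addr]
        accumulator += activation * parameter   # one MAC operation per step
    return accumulator

# Hypothetical banks for a single dot product.
activations = [1.0, 2.0, 4.0]
parameters = [0.5, 0.25, 0.5]
print(mac_cell(activations, parameters, range(3)))  # 0.5 + 0.5 + 2.0 = 3.0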
-
Publication Number: US10175912B1
Publication Date: 2019-01-08
Application Number: US15641824
Application Date: 2017-07-05
Applicant: Google LLC
Inventor: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
Abstract: Methods, systems, and apparatus, including an apparatus for transferring data using multiple buffers, including multiple memories and one or more processing units configured to determine buffer memory addresses for a sequence of data elements stored in a first data storage location that are being transferred to a second data storage location. For each group of one or more of the data elements in the sequence, a value of a buffer assignment element that can be switched between multiple values each corresponding to a different one of the memories is identified. A buffer memory address for the group of one or more data elements is determined based on the value of the buffer assignment element. The value of the buffer assignment element is switched prior to determining the buffer memory address for a subsequent group of one or more data elements of the sequence of data elements.
-
Publication Number: US20180365553A1
Publication Date: 2018-12-20
Application Number: US15927367
Application Date: 2018-03-21
Applicant: Google LLC
Inventor: Andreas Georg Nowatzyk, Olivier Temam, Ravi Narayanaswami, Uday Kumar Dasari
Abstract: A three dimensional neural network accelerator that includes a first neural network accelerator tile that includes a first transmission coil, and a second neural network accelerator tile that includes a second transmission coil, wherein the first neural network accelerator tile is adjacent to and aligned vertically with the second neural network accelerator tile, and wherein the first transmission coil is configured to wirelessly communicate with the second transmission coil via inductive coupling.
-
Publication Number: US20180197068A1
Publication Date: 2018-07-12
Application Number: US15820704
Application Date: 2017-11-22
Applicant: Google LLC
Inventor: Ravi Narayanaswami, Dong Hyuk Woo, Olivier Temam, Harshit Khaitan
CPC classification number: G06N3/04, G06F13/28, G06N3/0454, G06N3/063
Abstract: A computer-implemented method that includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
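The key point is that the instruction's data values (for example, the layer type and its dimensions) select the loop-nest structure the processing unit executes. A small Python sketch of that dispatch for a hypothetical fully-connected layer; the instruction fields and layer name are assumptions, not taken from the patent.

def run_tensor_computation(instruction, activations, parameters):
    """Pick a loop-nest structure from the data values carried by the instruction."""
    if instruction["layer_type"] == "fully_connected":
        out = [0.0] * instruction["out_features"]
        for o in range(instruction["out_features"]):      # outer loop over outputs
            for i in range(instruction["in_features"]):   # inner loop over inputs
                out[o] += activations[i] * parameters[o][i]
        return out
    raise NotImplementedError("other layer types would define different loop nests")

instr = {"layer_type": "fully_connected", "in_features": 2, "out_features": 2}
print(run_tensor_computation(instr, [1.0, 2.0], [[0.5, 0.5], [1.0, -1.0]]))
# [1.5, -1.0]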
-
Publication Number: US09946539B1
Publication Date: 2018-04-17
Application Number: US15603061
Application Date: 2017-05-23
Applicant: Google LLC
Inventor: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
CPC classification number: G06F17/16, G06F9/30065, G06F9/30101, G06F9/325, G06F9/3455, G06F2212/454, G06N3/00
Abstract: Methods, systems, and apparatus, including an apparatus for accessing an N-dimensional tensor, the apparatus including, for each dimension of the N-dimensional tensor, a partial address offset value element that stores a partial address offset value for the dimension based at least on an initial value for the dimension, a step value for the dimension, and a number of iterations of a loop for the dimension. The apparatus includes a hardware adder and a processor. The processor obtains an instruction to access a particular element of the N-dimensional tensor. The N-dimensional tensor has multiple elements arranged across each of the N dimensions, where N is an integer that is equal to or greater than one. The processor determines, using the partial address offset value elements and the hardware adder, an address of the particular element and outputs data indicating the determined address for accessing the particular element of the N-dimensional tensor.