-
Publication No.: US20190213005A1
Publication Date: 2019-07-11
Application No.: US16239760
Application Date: 2019-01-04
Applicant: Google LLC
Inventor: Olivier Temam , Ravi Narayanaswami , Harshit Khaitan , Dong Hyuk Woo
CPC classification number: G06F9/3001 , G06F9/30036 , G06F9/30065 , G06F13/28 , G06N3/04 , G06N3/0454 , G06N3/063
Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
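The core multiply-accumulate behavior this abstract describes can be sketched in plain Python. This is a minimal illustration, not the patented hardware: the function name `mac_cell` is invented here, and the two lists stand in for the activations streamed from the first memory bank and the parameters read from the second.

```python
def mac_cell(input_activations, parameters):
    """Sketch of a MAC operator: multiply each input activation by its
    corresponding parameter and accumulate the products into one sum."""
    acc = 0
    for activation, param in zip(input_activations, parameters):
        acc += activation * param  # one multiply-accumulate step per cycle
    return acc
```

In the described computing unit this loop is realized in hardware, with the traversal unit driving activations onto the data bus one per cycle.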
-
Publication No.: US10248908B2
Publication Date: 2019-04-02
Application No.: US15627022
Application Date: 2017-06-19
Applicant: Google LLC
Inventor: Olivier Temam , Harshit Khaitan , Ravi Narayanaswami , Dong Hyuk Woo
Abstract: Methods, systems, and apparatus for accessing a N-dimensional tensor are described. In some implementations, a method includes, for each of one or more first iterations of a first nested loop, performing iterations of a second nested loop that is nested within the first nested loop until a first loop bound for the second nested loop is reached. A number of iterations of the second nested loop for the one or more first iterations of the first nested loop is limited by the first loop bound in response to the second nested loop having a total number of iterations that exceeds a value of a hardware property of the computing system. After a penultimate iteration of the first nested loop has completed, one or more iterations of the second nested loop are performed for a final iteration of the first nested loop until an alternative loop bound is reached.
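The loop-splitting scheme in this abstract can be sketched as follows. Assume (this is an illustration, not the claimed method) that an inner loop's total trip count exceeds a hardware limit, so it is split across outer iterations: full outer iterations run the inner loop up to the first loop bound (the hardware limit), and the final outer iteration uses an alternative bound that covers the remainder.

```python
def traverse(total_inner, hw_limit):
    """Visit `total_inner` inner iterations without ever running the inner
    loop more than `hw_limit` times per outer iteration."""
    outer = -(-total_inner // hw_limit)               # ceiling division
    alt_bound = total_inner - (outer - 1) * hw_limit  # remainder for the last pass
    visited = []
    for i in range(outer):
        # Full iterations use the first loop bound; the final one uses
        # the alternative bound, as in the abstract.
        bound = hw_limit if i < outer - 1 else alt_bound
        for j in range(bound):
            visited.append(i * hw_limit + j)
    return visited
```

Every index is visited exactly once, and no inner pass exceeds the hardware limit.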
-
Publication No.: US20190034327A1
Publication Date: 2019-01-31
Application No.: US16112307
Application Date: 2018-08-24
Applicant: Google LLC
Inventor: Olivier Temam , Harshit Khaitan , Ravi Narayanaswami , Dong Hyuk Woo
IPC: G06F12/02 , G06N99/00 , G06N5/02 , G06F12/1009
Abstract: Methods, systems, and apparatus, including an apparatus for accessing data. In some implementations, an apparatus includes address offset value elements that are each configured to store an address offset value. For each address offset value element, the apparatus can include address computation elements that each store a value used to determine the address offset value. One or more processors are configured to receive a program for performing computations using tensor elements of a tensor. The processor(s) can identify, in the program, a prologue or epilogue loop having a corresponding data array for storing values of the prologue or epilogue loop and populate, for a first address offset value element that corresponds to the prologue or epilogue loop, the address computation elements for the first address offset value element with respective values based at least on a number of iterations of the prologue or epilogue loop.
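One way to picture an address offset value element driven by a prologue or epilogue loop is the sketch below. The class name, the single-stride simplification, and the wrap-around behavior are assumptions for illustration; the abstract's "address computation elements" are reduced here to a stride and an iteration count.

```python
class AddressOffsetElement:
    """Sketch of an address offset element: holds an offset that is
    advanced by a stride (an 'address computation' value) once per
    iteration, wrapping after the configured number of iterations."""

    def __init__(self, stride, num_iterations):
        self.stride = stride
        self.num_iterations = num_iterations
        self.offset = 0

    def step(self):
        """Return the current offset, then advance for the next iteration."""
        current = self.offset
        self.offset += self.stride
        if self.offset >= self.stride * self.num_iterations:
            self.offset = 0  # wrap after the final iteration
        return current
```

Populating the element amounts to loading `stride` and `num_iterations`, after which each loop iteration reads one address offset.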
-
Publication No.: US10175980B2
Publication Date: 2019-01-08
Application No.: US15335769
Application Date: 2016-10-27
Applicant: Google LLC
Inventor: Olivier Temam , Ravi Narayanaswami , Harshit Khaitan , Dong Hyuk Woo
Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
-
Publication No.: US20230162015A1
Publication Date: 2023-05-25
Application No.: US17985061
Application Date: 2022-11-10
Applicant: Google LLC
Inventor: Olivier Temam , Harshit Khaitan , Ravi Narayanaswami , Dong Hyuk Woo
CPC classification number: G06N3/063 , G06F13/00 , G06N3/045 , G06N3/048 , G06F9/3887 , G06F9/3895 , G06F17/16
Abstract: One embodiment of an accelerator includes a computing unit; a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations, the second memory bank configured to store a sufficient amount of neural network parameters on the computing unit to allow for latency below a specified level and throughput above a specified level. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the computations being performed by the MAC operator.
-
Publication No.: US20230004386A1
Publication Date: 2023-01-05
Application No.: US17892807
Application Date: 2022-08-22
Applicant: Google LLC
Inventor: Olivier Temam , Ravi Narayanaswami , Harshit Khaitan , Dong Hyuk Woo
Abstract: A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
-
Publication No.: US20220300421A1
Publication Date: 2022-09-22
Application No.: US17425918
Application Date: 2020-08-19
Applicant: Google LLC
Inventor: Suyog Gupta , Ravi Narayanaswami , Uday Kumar Dasari , Ali Iranli , Pavan Thirunagari , Vinu Vijay Kumar , Sunitha R. Kosireddy
IPC: G06F12/0802 , G06F3/06
Abstract: Components on an IC chip may operate faster or provide higher performance relative to power consumption if allowed access to sufficient memory resources. If every component is provided its own memory, however, the chip becomes expensive. In described implementations, memory is shared between two or more components. For example, a processing component can include computational circuitry and a memory coupled thereto. A multi-component cache controller is coupled to the memory. Logic circuitry is coupled to the cache controller and the memory. The logic circuitry selectively separates the memory into multiple memory partitions. A first memory partition can be allocated to the computational circuitry and provide storage to the computational circuitry. A second memory partition can be allocated to the cache controller and provide storage to multiple components. The relative capacities of the memory partitions are adjustable to accommodate fluctuating demands without dedicating individual memories to the components.
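The adjustable split this abstract describes can be sketched numerically. This is an illustrative model only (the function name and the fraction-based interface are invented): one physical memory is divided into a compute-local partition and a shared cache partition, and the split point can be moved as demand fluctuates.

```python
def partition_memory(total_kib, compute_fraction):
    """Split one physical memory into a partition allocated to the
    computational circuitry and a partition allocated to the
    multi-component cache controller. The split is adjustable:
    calling again with a new fraction models repartitioning."""
    if not 0.0 <= compute_fraction <= 1.0:
        raise ValueError("compute_fraction must be in [0, 1]")
    compute_kib = int(total_kib * compute_fraction)
    cache_kib = total_kib - compute_kib  # remainder serves other components
    return compute_kib, cache_kib
```

Because both partitions come out of one memory, the sum is constant: growing the compute partition shrinks the shared cache, rather than requiring a dedicated memory per component.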
-
Publication No.: US20220245453A1
Publication Date: 2022-08-04
Application No.: US17629437
Application Date: 2020-10-07
Applicant: Google LLC
IPC: G06N3/08
Abstract: Methods, systems, and apparatus, including an apparatus for redistributing tensor elements among computing units, are described. In one aspect, a method includes distributing tensor elements of an N-dimensional tensor among multiple computing units of a computation system. Each computing unit then redistributes, among the computing units, the subset of tensor elements previously distributed to it. To do so, each computing unit accesses redistribution partitioning data that specifies, for each computing unit, the tensor elements that unit is to store after the redistribution. For each tensor element previously distributed to it, a computing unit determines a global linearized index value for the element based on the element's multi-dimensional index, determines a destination computing unit using the redistribution partitioning data and the global linearized index value, and sends the tensor element to that destination computing unit.
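The two lookups at the heart of this scheme can be sketched as below. The row-major linearization and the contiguous-range form of the partitioning data are assumptions for illustration; the abstract does not fix either choice.

```python
def global_linear_index(indices, dims):
    """Row-major linearization of a multi-dimensional tensor index."""
    idx = 0
    for i, d in zip(indices, dims):
        idx = idx * d + i
    return idx

def destination_unit(linear_index, partition_bounds):
    """Map a global linearized index to the computing unit whose partition
    contains it. `partition_bounds[k]` is the exclusive upper bound of
    unit k's contiguous index range (a stand-in for the redistribution
    partitioning data)."""
    for unit, bound in enumerate(partition_bounds):
        if linear_index < bound:
            return unit
    raise ValueError("index outside all partitions")
```

A unit holding element `(1, 2)` of a 3×4 tensor would linearize it, look up the owning partition, and send the element there.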
-
Publication No.: US11366877B2
Publication Date: 2022-06-21
Application No.: US16928242
Application Date: 2020-07-14
Applicant: Google LLC
Inventor: Ravi Narayanaswami , Rahul Nagarajan , Dong Hyuk Woo , Christopher Daniel Leary
Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements to a dense matrix. The system is configured to receive a request for an output matrix based on sparse elements including sparse elements associated with a first dense matrix and sparse elements associated with a second dense matrix; obtain the sparse elements associated with the first dense matrix fetched by a first group of sparse element access units; obtain the sparse elements associated with the second dense matrix fetched by a second group of sparse element access units; and transform the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix to generate the output dense matrix that includes the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix.
-
Publication No.: US20220147793A1
Publication Date: 2022-05-12
Application No.: US17570784
Application Date: 2022-01-07
Applicant: Google LLC
Inventor: Andreas Georg Nowatzyk , Olivier Temam , Ravi Narayanaswami , Uday Kumar Dasari
Abstract: A three dimensional neural network accelerator that includes a first neural network accelerator tile that includes a first transmission coil, and a second neural network accelerator tile that includes a second transmission coil, wherein the first neural network accelerator tile is adjacent to and aligned vertically with the second neural network accelerator tile, and wherein the first transmission coil is configured to wirelessly communicate with the second transmission coil via inductive coupling.