-
Publication Number: US09959498B1
Publication Date: 2018-05-01
Application Number: US15336216
Filing Date: 2016-10-27
Applicant: Google LLC
Inventor: Ravi Narayanaswami , Dong Hyuk Woo , Olivier Temam , Harshit Khaitan
IPC: G06N3/04
CPC classification number: G06N3/04 , G06F13/28 , G06N3/0454 , G06N3/063
Abstract: A computer-implemented method that includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
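A minimal Python sketch of an instruction-driven loop nest as described in the abstract above: the loop structure is chosen from fields of the instruction. The field names "layer_type" and "dims" are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch; the instruction fields "layer_type" and "dims" are
# illustrative, not taken from the patent.
import numpy as np

def run_tensor_op(instr, activations, params):
    """Executes a loop nest whose structure is defined by the instruction fields."""
    if instr["layer_type"] == "fully_connected":
        out_dim, in_dim = instr["dims"]
        out = np.zeros(out_dim)
        for o in range(out_dim):            # outer loop over output elements
            for i in range(in_dim):         # inner loop over input elements
                out[o] += params[o, i] * activations[i]
        return out
    if instr["layer_type"] == "conv1d":
        out_len, k_len = instr["dims"]
        out = np.zeros(out_len)
        for o in range(out_len):            # loop over output positions
            for k in range(k_len):          # loop over kernel taps
                out[o] += params[k] * activations[o + k]
        return out
    raise ValueError("unsupported layer type")

instr = {"layer_type": "fully_connected", "dims": (2, 3)}
print(run_tensor_op(instr, np.array([1.0, 2.0, 3.0]),
                    np.array([[1.0, 0.0, 1.0], [0.5, 0.5, 0.5]])))   # [4. 3.]
```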
-
Publication Number: US20240289285A1
Publication Date: 2024-08-29
Application Number: US18505662
Filing Date: 2023-11-09
Applicant: Google LLC
Inventor: Dong Hyuk Woo , Ravi Narayanaswami
IPC: G06F13/16 , G06F9/38 , G06F15/76 , G06F17/16 , G06N3/045 , G06N3/063 , G06N3/08 , G06N3/10 , G06N5/04 , G06N20/00 , G06N20/10
CPC classification number: G06F13/1668 , G06F9/38 , G06F15/76 , G06F17/16 , G06N3/045 , G06N3/063 , G06N3/08 , G06N3/10 , G06N5/04 , G06N20/00 , G06N20/10 , Y02D10/00
Abstract: A computer-implemented method includes receiving, by a computing device, input activations and determining, by a controller of the computing device, whether each of the input activations has either a zero value or a non-zero value. The method further includes storing, in a memory bank of the computing device, at least one of the input activations. Storing the at least one input activation includes generating an index comprising one or more memory address locations that have input activation values that are non-zero values. The method still further includes providing, by the controller and from the memory bank, at least one input activation onto a data bus that is accessible by one or more units of a computational array. The activations are provided, at least in part, from a memory address location associated with the index.
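The sketch below illustrates the store-time indexing step in plain Python: the addresses of non-zero activations are recorded so that only those addresses feed the data bus. The data structures are assumptions for illustration, not the patented hardware.

```python
# Illustrative sketch of the zero-skipping idea: build an index of memory
# addresses that hold non-zero input activations, then feed only those
# addresses to the compute units. Names are hypothetical.

def store_activations(memory_bank, activations):
    """Stores activations and records the addresses of the non-zero ones."""
    nonzero_index = []
    for addr, value in enumerate(activations):
        memory_bank[addr] = value
        if value != 0:
            nonzero_index.append(addr)   # only non-zero addresses go in the index
    return nonzero_index

memory_bank = [0] * 8
index = store_activations(memory_bank, [0, 3, 0, 0, 7, 1, 0, 0])
print(index)                                  # [1, 4, 5]
data_bus = [memory_bank[a] for a in index]    # values provided to the computational array
```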
-
Publication Number: US11948060B2
Publication Date: 2024-04-02
Application Number: US17570784
Filing Date: 2022-01-07
Applicant: GOOGLE LLC
Inventor: Andreas Georg Nowatzyk , Olivier Temam , Ravi Narayanaswami , Uday Kumar Dasari
Abstract: A three dimensional neural network accelerator that includes a first neural network accelerator tile that includes a first transmission coil, and a second neural network accelerator tile that includes a second transmission coil, wherein the first neural network accelerator tile is adjacent to and aligned vertically with the second neural network accelerator tile, and wherein the first transmission coil is configured to wirelessly communicate with the second transmission coil via inductive coupling.
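A toy behavioral model of the stacked-tile arrangement: data written to the lower tile's coil appears at the upper tile's coil. The classes and the explicit pairing of coils are purely illustrative stand-ins for the inductive link.

```python
# Toy model (purely illustrative) of two vertically stacked tiles whose
# transmission coils are paired: data written to the lower tile's coil is
# read from the upper tile's coil, standing in for the inductive coupling.

class TransmissionCoil:
    def __init__(self):
        self.partner = None
        self.buffer = None
    def couple(self, other):                 # vertical alignment pairs the coils
        self.partner, other.partner = other, self
    def transmit(self, word):
        self.partner.buffer = word           # wireless hop to the aligned coil

class AcceleratorTile:
    def __init__(self):
        self.coil = TransmissionCoil()

lower, upper = AcceleratorTile(), AcceleratorTile()
lower.coil.couple(upper.coil)                # tiles stacked and aligned vertically
lower.coil.transmit("partial_sums")
print(upper.coil.buffer)                     # "partial_sums"
```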
-
Publication Number: US11727259B2
Publication Date: 2023-08-15
Application Number: US17985061
Filing Date: 2022-11-10
Applicant: Google LLC
Inventor: Olivier Temam , Harshit Khaitan , Ravi Narayanaswami , Dong Hyuk Woo
CPC classification number: G06N3/063 , G06F9/3887 , G06F9/3895 , G06F13/00 , G06F17/16 , G06N3/045 , G06N3/048
Abstract: One embodiment of an accelerator includes a computing unit; a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations, the second memory bank configured to store a sufficient amount of the neural network parameters on the computing unit to allow for latency below a specified level with throughput above a specified level. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs computations associated with at least one element of a data array, the one or more computations performed by the MAC operator.
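A behavioral Python sketch of the compute flow the abstract outlines: a traversal step places one stored activation on a shared bus, and each MAC cell accumulates its product with a parameter from the parameter bank. All names and shapes are illustrative assumptions.

```python
# Minimal behavioral sketch (assumed structure, not the patented hardware):
# a traversal unit selects which stored activation is placed on the data bus,
# and each MAC cell multiplies it by a parameter from the parameter bank and
# accumulates the product.

class MacCell:
    def __init__(self):
        self.acc = 0.0
    def mac(self, activation, parameter):
        self.acc += activation * parameter    # multiply-accumulate step

activation_bank = [1.0, 2.0, 3.0]             # "first memory bank"
parameter_bank = [[0.5, -1.0, 2.0],           # "second memory bank", one row per cell
                  [1.0,  0.5, -1.0]]
cells = [MacCell() for _ in parameter_bank]

for step in range(len(activation_bank)):      # traversal unit walks the activations
    on_bus = activation_bank[step]            # control signal puts one value on the bus
    for cell, row in zip(cells, parameter_bank):
        cell.mac(on_bus, row[step])

print([c.acc for c in cells])                 # [4.5, -1.0]
```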
-
Publication Number: US20220391472A1
Publication Date: 2022-12-08
Application Number: US17842420
Filing Date: 2022-06-16
Applicant: Google LLC
Inventor: Ravi Narayanaswami , Rahul Nagarajan , Dong Hyuk Woo , Christopher Daniel Leary
Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements to a dense matrix. The system is configured to receive a request for an output matrix based on sparse elements, including sparse elements associated with a first dense matrix and sparse elements associated with a second dense matrix; obtain the sparse elements associated with the first dense matrix fetched by a first group of sparse element access units; obtain the sparse elements associated with the second dense matrix fetched by a second group of sparse element access units; and transform the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix to generate the output dense matrix that includes the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix.
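A hedged sketch of the scatter step: sparse (row, column, value) elements fetched for the two dense operands are written into one dense output matrix. Representing the two groups of access units as plain lists is an illustrative simplification.

```python
# Hedged sketch of the sparse-to-dense idea: two groups of "access units"
# (here, plain lists of (row, col, value) triples) fetch sparse elements for
# two dense operands, and the triples are scattered into one dense output.
import numpy as np

def to_dense(shape, *sparse_groups):
    """Scatters every (row, col, value) triple from each group into one matrix."""
    out = np.zeros(shape)
    for group in sparse_groups:
        for row, col, value in group:
            out[row, col] = value
    return out

group_a = [(0, 1, 5.0), (2, 0, 3.0)]     # elements of the first dense matrix
group_b = [(1, 2, 7.0)]                  # elements of the second dense matrix
print(to_dense((3, 3), group_a, group_b))
```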
-
Publication Number: US11106606B2
Publication Date: 2021-08-31
Application Number: US16514562
Filing Date: 2019-07-17
Applicant: Google LLC
Inventor: Dong Hyuk Woo , Ravi Narayanaswami
IPC: G06F13/16 , G06N20/00 , G06N3/10 , G06F15/76 , G06F9/38 , G06N3/04 , G06N20/10 , G06F17/16 , G06N5/04 , G06N3/063 , G06N3/08
Abstract: A computer-implemented method includes receiving, by a computing device, input activations and determining, by a controller of the computing device, whether each of the input activations has either a zero value or a non-zero value. The method further includes storing, in a memory bank of the computing device, at least one of the input activations. Storing the at least one input activation includes generating an index comprising one or more memory address locations that have input activation values that are non-zero values. The method still further includes providing, by the controller and from the memory bank, at least one input activation onto a data bus that is accessible by one or more units of a computational array. The activations are provided, at least in part, from a memory address location associated with the index.
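As a companion to the store-time sketch above, this compute-time sketch shows how a controller could consume only the indexed, non-zero activations, so the multiply count scales with the number of non-zeros. The usage is an assumption, not text from the patent.

```python
# Companion sketch (assumed usage): at compute time the controller reads only
# the addresses recorded in the index, so zero activations are never fetched.

def sparse_dot(memory_bank, index, weights):
    """Dot product that touches only the addresses recorded in the index."""
    total = 0.0
    for addr in index:
        total += memory_bank[addr] * weights[addr]   # zeros are skipped entirely
    return total

memory_bank = [0, 3, 0, 0, 7, 1, 0, 0]
index = [1, 4, 5]                                    # addresses of non-zero values
weights = [2, 1, 0, 4, 2, 3, 1, 5]
print(sparse_dot(memory_bank, index, weights))       # 3*1 + 7*2 + 1*3 = 20
```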
-
Publication Number: US20210256361A1
Publication Date: 2021-08-19
Application Number: US17186598
Filing Date: 2021-02-26
Applicant: Google LLC
Inventor: Uday Kumar Dasari , Olivier Temam , Ravi Narayanaswami , Dong Hyuk Woo
Abstract: Apparatus and methods for processing neural network models are provided. The apparatus can comprise a plurality of identical artificial intelligence processing dies. Each artificial intelligence processing die among the plurality of identical artificial intelligence processing dies can include at least one inter-die input block and at least one inter-die output block. Each artificial intelligence processing die among the plurality of identical artificial intelligence processing dies is communicatively coupled to another artificial intelligence processing die among the plurality of identical artificial intelligence processing dies by way of one or more communication paths from the at least one inter-die output block of the artificial intelligence processing die to the at least one inter-die input block of the artificial intelligence processing die. Each artificial intelligence processing die among the plurality of identical artificial intelligence processing dies corresponds to at least one layer of a neural network.
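A behavioral sketch of the die-per-layer pipeline: each die's inter-die output block feeds the next die's inter-die input block. The class name, the ReLU layer, and the matrix shapes are illustrative assumptions.

```python
# Behavioral sketch (hypothetical names) of identical dies chained layer by layer:
# each die's inter-die output block feeds the next die's inter-die input block.
import numpy as np

class AiDie:
    """One processing die that evaluates a single neural-network layer."""
    def __init__(self, weights):
        self.weights = weights
    def process(self, inter_die_input):
        # inter-die input block -> layer compute -> inter-die output block
        return np.maximum(self.weights @ inter_die_input, 0.0)   # matmul + ReLU

rng = np.random.default_rng(0)
dies = [AiDie(rng.standard_normal((4, 4))) for _ in range(3)]    # identical die design

x = rng.standard_normal(4)
for die in dies:   # communication path: output block of die i -> input block of die i+1
    x = die.process(x)
print(x)
```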
-
Publication Number: US20200117696A1
Publication Date: 2020-04-16
Application Number: US16159450
Filing Date: 2018-10-12
Applicant: Google LLC
Inventor: Anand Suresh Kane , Ravi Narayanaswami
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for a circuit configured to add multiple inputs. The circuit includes a first adder section that receives a first input and a second input and adds the inputs to generate a first sum. The circuit also includes a second adder section that receives the first and second inputs and adds the inputs to generate a second sum. An input processor of the circuit receives the first and second inputs, determines whether a relationship between the first and second inputs satisfies a set of conditions, and selects a high-power mode of the adder circuit or a low-power mode of the adder circuit using the determined relationship between the first and second inputs. The high-power mode is selected and the first and second inputs are routed to the second adder section when the relationship satisfies the set of conditions.
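A Python sketch of the mode-selection idea with a made-up condition (overflow risk for a narrow adder section). The actual relationship tested by the input processor is not specified here, so the predicate below is an assumption.

```python
# Sketch with an assumed condition: if the operands could overflow a narrow
# (low-power) adder section, route them to the wide (high-power) section.
NARROW_BITS = 8
NARROW_MAX = (1 << NARROW_BITS) - 1

def narrow_add(a, b):          # first adder section (low-power path)
    return (a + b) & NARROW_MAX

def wide_add(a, b):            # second adder section (high-power path)
    return a + b

def add(a, b):
    # Conservative check by the "input processor": if either operand already
    # uses the top bit of the narrow width, the narrow section could overflow.
    needs_wide = max(a.bit_length(), b.bit_length()) >= NARROW_BITS
    return wide_add(a, b) if needs_wide else narrow_add(a, b)

print(add(100, 27))    # 127 -> low-power section suffices
print(add(200, 100))   # 300 -> routed to the high-power section
```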
-
Publication Number: US20200027195A1
Publication Date: 2020-01-23
Application Number: US16531876
Filing Date: 2019-08-05
Applicant: Google LLC
Inventor: Carrell Daniel Killebrew , Ravi Narayanaswami , Dong Hyuk Woo
Abstract: Methods, systems, and apparatus, including an apparatus for determining pixel coordinates for image transformation and memory addresses for storing the transformed image data. In some implementations, a system includes a processing unit configured to perform machine learning computations for images using a machine learning model and pixel values for the images, a storage medium configured to store the pixel values for the images, and a memory address computation unit that includes one or more hardware processors. The processor(s) are configured to receive image data for an image and determine that the dimensions of the image do not match the dimensions of the machine learning model. In response, the processor(s) determine pixel coordinates for a transformed version of the input image and, for each of the pixel coordinates, memory address(es), in the storage medium, for storing pixel value(s) that will be used to generate an input to the machine learning model.
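The sketch below shows one plausible version of the two computations the abstract names: which source pixel coordinate each output pixel maps to, and the row-major memory address where that pixel value lives. Nearest-neighbor scaling and the row-major layout are both illustrative assumptions.

```python
# Hedged sketch: (1) pixel-coordinate computation for a resized image and
# (2) the row-major memory address of each needed source pixel. Nearest-neighbor
# scaling is an assumption; the patent covers the general transformation.

def transform_addresses(src_h, src_w, dst_h, dst_w, base_addr=0):
    addresses = []
    for y_out in range(dst_h):
        for x_out in range(dst_w):
            # pixel-coordinate computation (nearest neighbor)
            y_src = min(src_h - 1, (y_out * src_h) // dst_h)
            x_src = min(src_w - 1, (x_out * src_w) // dst_w)
            # memory-address computation (row-major layout)
            addresses.append(base_addr + y_src * src_w + x_src)
    return addresses

# 4x4 source image resized to the model's expected 2x2 input
print(transform_addresses(4, 4, 2, 2))    # [0, 2, 8, 10]
```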
-
Publication Number: US10534578B1
Publication Date: 2020-01-14
Application Number: US16113410
Filing Date: 2018-08-27
Applicant: Google LLC
Inventor: Ravi Narayanaswami
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for a circuit configured to perform computations using multiple inputs. The circuit includes multiple adder circuits and a selection circuit that includes multiple input selectors. Each adder circuit performs an addition operation using sets of inputs derived from the multiple inputs. The input selectors are configured to select one or more inputs from a set of inputs derived from the multiple inputs based on a sign bit for an input in the set and pass the selected inputs to an adder circuit that generates a sum using the selected inputs. The circuit determines a routing of the sum to another adder circuit based in part on a sign bit for the input in the set of inputs.
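An illustrative sketch of sign-based selection and routing: each input is steered to one of two adders by its sign bit, and the two partial sums are then routed to a final adder. This is one possible arrangement, not the claimed circuit.

```python
# Illustrative sketch (one possible arrangement, not the claimed circuit):
# input selectors steer each value to a positive-sum or negative-sum adder
# based on its sign bit, and the two partial sums are routed to a final adder.

def sign_bit(x):
    return 1 if x < 0 else 0

def add_all(values):
    pos_sum = sum(v for v in values if sign_bit(v) == 0)   # first adder circuit
    neg_sum = sum(v for v in values if sign_bit(v) == 1)   # second adder circuit
    return pos_sum + neg_sum                               # sums routed to a final adder

print(add_all([5, -3, 8, -1, 2]))    # 11
```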
-