-
Publication No.: US20210034697A1
Publication Date: 2021-02-04
Application No.: US16928242
Filing Date: 2020-07-14
Applicant: Google LLC
Inventor: Ravi Narayanaswami, Rahul Nagarajan, Dong Hyuk Woo, Christopher Daniel Leary
Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements to a dense matrix. The system is configured to receive a request for an output matrix based on sparse elements including sparse elements associated with a first dense matrix and sparse elements associated with a second dense matrix; obtain the sparse elements associated with the first dense matrix fetched by a first group of sparse element access units; obtain the sparse elements associated with the second dense matrix fetched by a second group of sparse element access units; and transform the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix to generate the output dense matrix that includes the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix.
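A minimal sketch of the sparse-to-dense transformation the abstract describes, modeling the two groups of "sparse element access units" as plain gather functions over row-id-to-vector tables. All names and the table layout are illustrative assumptions, not the patented implementation.

```python
# Model one group of sparse element access units as a gather over a table
# mapping row ids to dense row vectors (an illustrative assumption).
def fetch_group(table, row_ids):
    """Fetch the sparse elements (rows) identified by row_ids."""
    return [table[r] for r in row_ids]

def to_dense_matrix(table_a, ids_a, table_b, ids_b):
    """Combine sparse elements from two dense matrices into one output."""
    rows_a = fetch_group(table_a, ids_a)   # first group of access units
    rows_b = fetch_group(table_b, ids_b)   # second group of access units
    return rows_a + rows_b                 # dense output: all fetched rows

table_a = {0: [1.0, 2.0], 3: [3.0, 4.0]}
table_b = {1: [5.0, 6.0]}
out = to_dense_matrix(table_a, [3, 0], table_b, [1])
# out == [[3.0, 4.0], [1.0, 2.0], [5.0, 6.0]]
```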
-
Publication No.: US10719575B2
Publication Date: 2020-07-21
Application No.: US16571749
Filing Date: 2019-09-16
Applicant: Google LLC
Inventor: Ravi Narayanaswami, Rahul Nagarajan, Dong Hyuk Woo, Christopher Daniel Leary
Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements to a dense matrix. The system is configured to receive a request for an output matrix based on sparse elements including sparse elements associated with a first dense matrix and sparse elements associated with a second dense matrix; obtain the sparse elements associated with the first dense matrix fetched by a first group of sparse element access units; obtain the sparse elements associated with the second dense matrix fetched by a second group of sparse element access units; and transform the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix to generate the output dense matrix that includes the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix.
-
Publication No.: US20250004956A1
Publication Date: 2025-01-02
Application No.: US18655653
Filing Date: 2024-05-06
Applicant: Google LLC
Inventor: Rahul Nagarajan, Hema Hariharan
Abstract: Methods, systems, and apparatus, including computer-readable media, are described for an integrated circuit that accelerates machine-learning computations. The circuit includes processor cores that each include: multiple channel controllers; an interface controller for coupling each channel controller to any memory channel of a system memory; and a fetch unit in each channel controller. Each fetch unit is configured to: receive channel data that encodes addressing information; obtain, based on the addressing information, data from any memory channel of the system memory using the interface controller; and write the obtained data to a vector memory of the processor core via the corresponding channel controller that includes the respective fetch unit.
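A toy model of the fetch-unit behavior in the abstract: decode addressing information from channel data, read from any memory channel, and write the result into the core's vector memory. The tuple encoding and all names are illustrative assumptions.

```python
class FetchUnit:
    """Illustrative fetch unit: any memory channel is reachable via the
    interface controller (modeled here as direct list indexing)."""
    def __init__(self, system_memory, vector_memory):
        self.system_memory = system_memory    # list of memory channels
        self.vector_memory = vector_memory    # core-local vector memory

    def fetch(self, channel_data):
        # Decode addressing information (assumed tuple encoding).
        channel, offset, dest = channel_data
        value = self.system_memory[channel][offset]  # any channel readable
        self.vector_memory[dest] = value              # write to vector memory
        return value

sysmem = [[10, 11], [20, 21]]   # two memory channels
vmem = {}
fu = FetchUnit(sysmem, vmem)
fu.fetch((1, 0, 'slot0'))       # read channel 1, offset 0 -> 20
```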
-
Publication No.: US20240273363A1
Publication Date: 2024-08-15
Application No.: US18582294
Filing Date: 2024-02-20
Applicant: Google LLC
Inventor: Rahul Nagarajan, Lifeng Nai, George Kurian, Hema Hariharan
Abstract: Methods, systems, and apparatus, including computer-readable media, are described for performing neural network computations using a system configured to implement a neural network on a hardware circuit. The system includes a host that receives a batch of inputs to a neural network layer. Each of the inputs is stored in a memory location identified by an address. The system identifies one or more duplicate addresses in a listing of addresses for one or more inputs. For each duplicate address: the system generates a unique identifier that identifies the duplicate address in the listing of addresses. The system (i) obtains first inputs from memory locations identified by addresses corresponding to the unique identifiers and (ii) generates an output of the layer from the obtained first inputs.
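The duplicate-address handling in the abstract can be sketched as follows: assign each distinct address a unique identifier, read each address from memory once, then expand the results back to batch order. This is a minimal illustration under assumed names, not the patented circuit.

```python
def deduplicate_addresses(addresses):
    """Assign each distinct address one unique identifier."""
    unique_ids = {}                 # address -> unique identifier
    for addr in addresses:
        if addr not in unique_ids:
            unique_ids[addr] = len(unique_ids)
    # One identifier per original input, pointing into the unique list.
    index_map = [unique_ids[a] for a in addresses]
    return list(unique_ids), index_map

def gather(memory, addresses):
    """Fetch first inputs once per unique address, then expand."""
    unique, index_map = deduplicate_addresses(addresses)
    first_inputs = [memory[a] for a in unique]   # each address read once
    return [first_inputs[i] for i in index_map]  # restore batch order

memory = {10: 'a', 20: 'b', 30: 'c'}
result = gather(memory, [10, 20, 10, 30])
# result == ['a', 'b', 'a', 'c']
```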
-
Publication No.: US20240211413A1
Publication Date: 2024-06-27
Application No.: US18596835
Filing Date: 2024-03-06
Applicant: Google LLC
Inventor: Rahul Nagarajan, Arpith Chacko Jacob, Suvinay Subramanian, Hema Hariharan
CPC classification number: G06F13/161, G06F9/35, G06F9/3869, G06F9/522
Abstract: Generally disclosed herein is a hardware/software interface for asynchronous data movement between an off-core memory and a core-local memory, referred to as “stream transfers”, and a stream ordering model. The stream transfers allow software to more efficiently express common data-movement patterns, specifically ones seen in sparse workloads. Direct stream instructions that belong to a stream are processed in-order. For indirect stream instructions, offset elements in an offset list are processed in order. A sync flag is updated to indicate monotonic incremental progress for the stream.
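The ordering guarantees in the abstract can be modeled in a few lines: direct transfers in a stream retire in order, the offset list of an indirect transfer is processed in order, and the sync flag only ever advances. This is a software analogy under assumed names, not the hardware interface itself.

```python
class Stream:
    """Toy model of the stream ordering model: in-order processing and a
    monotonically increasing sync flag reporting incremental progress."""
    def __init__(self, memory):
        self.memory = memory
        self.sync_flag = 0              # elements retired so far

    def direct(self, base, count):
        """Direct stream transfer: a contiguous in-order read."""
        out = [self.memory[base + i] for i in range(count)]
        self.sync_flag += count         # monotonic progress update
        return out

    def indirect(self, offsets):
        """Indirect stream transfer: offset list processed in order."""
        out = []
        for off in offsets:
            out.append(self.memory[off])
            self.sync_flag += 1         # progress after each element
        return out

s = Stream(memory=list(range(100, 120)))
direct_out = s.direct(0, 3)         # [100, 101, 102]
indirect_out = s.indirect([5, 2, 9])  # [105, 102, 109]
```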
-
Publication No.: US11977499B2
Publication Date: 2024-05-07
Application No.: US17722782
Filing Date: 2022-04-18
Applicant: Google LLC
Inventor: Rahul Nagarajan, Arpith Chacko Jacob, Suvinay Subramanian, Hema Hariharan
CPC classification number: G06F13/161, G06F9/35, G06F9/3869, G06F9/522
Abstract: Generally disclosed herein is a hardware/software interface for asynchronous data movement between an off-core memory and a core-local memory, referred to as “stream transfers”, and a stream ordering model. The stream transfers allow software to more efficiently express common data-movement patterns, specifically ones seen in sparse workloads. Direct stream instructions that belong to a stream are processed in-order. For indirect stream instructions, offset elements in an offset list are processed in order. A sync flag is updated to indicate monotonic incremental progress for the stream.
-
Publication No.: US11966745B2
Publication Date: 2024-04-23
Application No.: US17972663
Filing Date: 2022-10-25
Applicant: Google LLC
Inventor: Rahul Nagarajan, Suvinay Subramanian, Arpith Chacko Jacob
CPC classification number: G06F9/3887, G06F9/30036
Abstract: Aspects of the disclosure are directed to a cross-lane processing unit (XPU) for performing data-dependent operations across multiple data processing lanes of a processor. Rather than implementing operation-specific circuits for each data-dependent operation, the XPU can be configured to perform different operations in response to input signals configuring individual operations performed by processing cells and crossbars arranged as a stacked network in the XPU. Each processing cell can receive and process data across multiple data processing lanes. Aspects of the disclosure include configuring the XPU to use a vector sort network to perform a duplicate count eliminating the need to configure the XPU separately for sorting and duplicate counting.
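The sort-then-count idea in the abstract has a simple software analogy: once values are sorted, duplicates sit in adjacent lanes, so a duplicate count falls out of one pass of adjacent comparisons. Python's `sorted` stands in for the vector sort network here; this is an illustration, not the XPU circuit.

```python
def sort_and_count_duplicates(values):
    """Sort, then count duplicates by comparing adjacent elements,
    mimicking how a sort network makes duplicate counting a local check."""
    ordered = sorted(values)          # stand-in for the vector sort network
    duplicates = sum(1 for a, b in zip(ordered, ordered[1:]) if a == b)
    return ordered, duplicates

ordered, dup = sort_and_count_duplicates([7, 3, 7, 1, 3, 7])
# ordered == [1, 3, 3, 7, 7, 7]; dup == 3 (one extra 3, two extra 7s)
```

One configuration serves both operations: the same sorted output is reused for the count, rather than configuring separate sort and duplicate-count passes.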
-
Publication No.: US20230376759A1
Publication Date: 2023-11-23
Application No.: US18305297
Filing Date: 2023-04-21
Applicant: Google LLC
Inventor: Rahul Nagarajan, Lifeng Nai, George Kurian, Hema Hariharan
Abstract: Methods, systems, and apparatus, including computer-readable media, are described for performing neural network computations using a system configured to implement a neural network on a hardware circuit. The system includes a host that receives a batch of inputs to a neural network layer. Each of the inputs is stored in a memory location identified by an address. The system identifies one or more duplicate addresses in a listing of addresses for one or more inputs. For each duplicate address: the system generates a unique identifier that identifies the duplicate address in the listing of addresses. The system (i) obtains first inputs from memory locations identified by addresses corresponding to the unique identifiers and (ii) generates an output of the layer from the obtained first inputs.
-
Publication No.: US20220309011A1
Publication Date: 2022-09-29
Application No.: US17707849
Filing Date: 2022-03-29
Applicant: Google LLC
Inventor: Rahul Nagarajan, Hema Hariharan
Abstract: Methods, systems, and apparatus, including computer-readable media, are described for an integrated circuit that accelerates machine-learning computations. The circuit includes processor cores that each include: multiple channel controllers; an interface controller for coupling each channel controller to any memory channel of a system memory; and a fetch unit in each channel controller. Each fetch unit is configured to: receive channel data that encodes addressing information; obtain, based on the addressing information, data from any memory channel of the system memory using the interface controller; and write the obtained data to a vector memory of the processor core via the corresponding channel controller that includes the respective fetch unit.
-
Publication No.: US11222258B2
Publication Date: 2022-01-11
Application No.: US16865539
Filing Date: 2020-05-04
Applicant: Google LLC
Inventor: Rahul Nagarajan, Hema Hariharan
Abstract: Methods, systems, and apparatus, including computer-readable media, are described for performing neural network computations using a system configured to implement a neural network on a hardware circuit. The system includes a process ID unit that receives requests to obtain data from a memory that includes memory locations that are each identified by an address. For each request, the process ID unit selects a channel controller to receive the request, provides the request to be processed by the selected channel controller, and obtains the data from memory in response to processing the request using the selected channel controller. The channel controller is one of multiple channel controllers that are configured to access any memory location of the memory. The system performs the neural network computations using the data obtained from memory and resources allocated from a shared memory of the hardware circuit.
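A rough sketch of the process ID unit's role: since every channel controller can access any memory location, the unit is free to pick any controller for each request. The round-robin selection policy below is an assumption for illustration; the abstract does not specify how a controller is chosen.

```python
from itertools import cycle

class ProcessIdUnit:
    """Illustrative process ID unit: any controller can serve any address,
    so requests are spread round-robin across controllers (assumed policy)."""
    def __init__(self, num_controllers):
        self._next = cycle(range(num_controllers))

    def read(self, memory, address):
        controller = next(self._next)   # select a channel controller
        # The selected controller processes the request against memory.
        return controller, memory[address]

pid = ProcessIdUnit(4)
mem = {0: 'x', 1: 'y'}
first = pid.read(mem, 0)    # (0, 'x'): controller 0 serves the request
second = pid.read(mem, 1)   # (1, 'y'): next controller takes the next one
```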