MATRIX PROCESSING APPARATUS
    Invention Application

    Publication Number: US20210034697A1

    Publication Date: 2021-02-04

    Application Number: US16928242

    Filing Date: 2020-07-14

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements to a dense matrix. The system is configured to receive a request for an output matrix based on sparse elements including sparse elements associated with a first dense matrix and sparse elements associated with a second dense matrix; obtain the sparse elements associated with the first dense matrix fetched by a first group of sparse element access units; obtain the sparse elements associated with the second dense matrix fetched by a second group of sparse element access units; and transform the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix to generate the output dense matrix that includes the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix.
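
    A minimal Python sketch of the flow this abstract describes: sparse elements fetched by two groups of access units are scattered into one dense output matrix. The (row, col, value) triple encoding, the callable access units, and the name build_output_matrix are illustrative assumptions, not the patented hardware design.

        import numpy as np

        # Hypothetical encoding: each sparse element is a (row, col, value)
        # triple, and each "sparse element access unit" is modeled as a
        # callable returning the triples it fetched.
        def build_output_matrix(group_a_units, group_b_units, shape):
            """Scatter elements fetched by two groups of access units
            into one dense output matrix (zeros elsewhere)."""
            out = np.zeros(shape)
            for unit in list(group_a_units) + list(group_b_units):
                for row, col, value in unit():
                    out[row, col] = value
            return out

        # Example: one access-unit group per source dense matrix.
        group_a = [lambda: [(0, 1, 5.0)], lambda: [(1, 0, 3.0)]]
        group_b = [lambda: [(2, 3, 7.0)]]
        print(build_output_matrix(group_a, group_b, shape=(4, 4)))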

    Matrix processing apparatus
    Invention Grant

    Publication Number: US10719575B2

    Publication Date: 2020-07-21

    Application Number: US16571749

    Filing Date: 2019-09-16

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including a system for transforming sparse elements to a dense matrix. The system is configured to receive a request for an output matrix based on sparse elements including sparse elements associated with a first dense matrix and sparse elements associated with a second dense matrix; obtain the sparse elements associated with the first dense matrix fetched by a first group of sparse element access units; obtain the sparse elements associated with the second dense matrix fetched by a second group of sparse element access units; and transform the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix to generate the output dense matrix that includes the sparse elements associated with the first dense matrix and the sparse elements associated with the second dense matrix.

    ON-CHIP INTERCONNECT FOR MEMORY CHANNEL CONTROLLERS

    Publication Number: US20250004956A1

    Publication Date: 2025-01-02

    Application Number: US18655653

    Filing Date: 2024-05-06

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer-readable media, are described for an integrated circuit that accelerates machine-learning computations. The circuit includes processor cores that each include: multiple channel controllers; an interface controller for coupling each channel controller to any memory channel of a system memory; and a fetch unit in each channel controller. Each fetch unit is configured to: receive channel data that encodes addressing information; obtain, based on the addressing information, data from any memory channel of the system memory using the interface controller; and write the obtained data to a vector memory of the processor core via the corresponding channel controller that includes the respective fetch unit.
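
    The fetch path can be modeled in a few lines of Python: channel data is decoded into addressing information, the addressed memory channel is read (any channel is reachable through the interface controller), and the result lands in vector memory. The tuple layout and container types below are illustrative assumptions; the patent claims hardware, not this API.

        # Toy model of the claimed fetch path; the channel-data layout
        # (channel_id, offset, length, dest) is an assumption.
        def fetch(channel_data, system_memory, vector_memory):
            """Decode addressing info, read the addressed memory channel,
            and write the data into the core's vector memory."""
            channel_id, offset, length, dest = channel_data
            data = system_memory[channel_id][offset:offset + length]
            vector_memory[dest:dest + length] = data

        # Example: 4 memory channels of 8 words each, one vector memory.
        system_memory = [[c * 10 + i for i in range(8)] for c in range(4)]
        vector_memory = [0] * 16
        fetch((2, 3, 4, 0), system_memory, vector_memory)  # channel 2, words 3..6
        print(vector_memory[:4])  # [23, 24, 25, 26]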

    ACCELERATED EMBEDDING LAYER COMPUTATIONS
    Invention Publication

    Publication Number: US20240273363A1

    Publication Date: 2024-08-15

    Application Number: US18582294

    Filing Date: 2024-02-20

    Applicant: Google LLC

    CPC classification number: G06N3/08 G06F1/03 G06N3/063 G06N20/10

    Abstract: Methods, systems, and apparatus, including computer-readable media, are described for performing neural network computations using a system configured to implement a neural network on a hardware circuit. The system includes a host that receives a batch of inputs to a neural network layer. Each of the inputs is stored in a memory location identified by an address. The system identifies one or more duplicate addresses in a listing of addresses for one or more inputs. For each duplicate address: the system generates a unique identifier that identifies the duplicate address in the listing of addresses. The system (i) obtains first inputs from memory locations identified by addresses corresponding to the unique identifiers and (ii) generates an output of the layer from the obtained first inputs.
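
    The deduplication step lends itself to a short sketch: fetch each unique address once, then reuse the fetched row for every duplicate. Using the index into the deduplicated address list as the unique identifier is an assumption made here for illustration; the abstract does not fix the encoding.

        import numpy as np

        def gather_with_dedup(addresses, memory):
            # np.unique returns the sorted unique addresses plus, for each
            # original address, its index in that unique list; that index
            # serves as the "unique identifier" in this sketch.
            unique_addrs, inverse = np.unique(addresses, return_inverse=True)
            fetched = memory[unique_addrs]  # one memory read per unique address
            return fetched[inverse]         # expand back to full batch order

        memory = np.arange(20.0).reshape(10, 2)      # 10 rows of embedding data
        addresses = np.array([3, 7, 3, 1, 7])        # 3 and 7 are duplicates
        print(gather_with_dedup(addresses, memory))  # 5 rows, 3 unique reads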

    Sparse SIMD cross-lane processing unit

    Publication Number: US11966745B2

    Publication Date: 2024-04-23

    Application Number: US17972663

    Filing Date: 2022-10-25

    Applicant: Google LLC

    CPC classification number: G06F9/3887 G06F9/30036

    Abstract: Aspects of the disclosure are directed to a cross-lane processing unit (XPU) for performing data-dependent operations across multiple data processing lanes of a processor. Rather than implementing operation-specific circuits for each data-dependent operation, the XPU can be configured to perform different operations in response to input signals configuring individual operations performed by processing cells and crossbars arranged as a stacked network in the XPU. Each processing cell can receive and process data across multiple data processing lanes. Aspects of the disclosure include configuring the XPU to use a vector sort network to perform a duplicate count, eliminating the need to configure the XPU separately for sorting and duplicate counting.
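
    The sort-then-count idea can be shown at a high level: once a vector is sorted, equal keys occupy adjacent lanes, so a single neighbor-compare sweep yields duplicate counts. The sketch below models that effect in plain Python; it does not reproduce the XPU's processing-cell and crossbar network.

        def sort_and_count_duplicates(lane_values):
            ordered = sorted(lane_values)  # stands in for the vector sort network
            keys, counts = [], []
            for v in ordered:
                if keys and keys[-1] == v:
                    counts[-1] += 1        # same key as its neighbor: bump count
                else:
                    keys.append(v)         # new key: start a fresh run
                    counts.append(1)
            return list(zip(keys, counts))

        print(sort_and_count_duplicates([7, 3, 7, 1, 3, 7]))
        # [(1, 1), (3, 2), (7, 3)]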

    ACCELERATED EMBEDDING LAYER COMPUTATIONS
    Invention Publication

    Publication Number: US20230376759A1

    Publication Date: 2023-11-23

    Application Number: US18305297

    Filing Date: 2023-04-21

    Applicant: Google LLC

    CPC classification number: G06N3/08 G06N20/10 G06F1/03 G06N3/063

    Abstract: Methods, systems, and apparatus, including computer-readable media, are described for performing neural network computations using a system configured to implement a neural network on a hardware circuit. The system includes a host that receives a batch of inputs to a neural network layer. Each of the inputs is stored in a memory location identified by an address. The system identifies one or more duplicate addresses in a listing of addresses for one or more inputs. For each duplicate address: the system generates a unique identifier that identifies the duplicate address in the listing of addresses. The system (i) obtains first inputs from memory locations identified by addresses corresponding to the unique identifiers and (ii) generates an output of the layer from the obtained first inputs.

    ON-CHIP INTERCONNECT FOR MEMORY CHANNEL CONTROLLERS

    Publication Number: US20220309011A1

    Publication Date: 2022-09-29

    Application Number: US17707849

    Filing Date: 2022-03-29

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer-readable media, are described for an integrated circuit that accelerates machine-learning computations. The circuit includes processor cores that each include: multiple channel controllers; an interface controller for coupling each channel controller to any memory channel of a system memory; and a fetch unit in each channel controller. Each fetch unit is configured to: receive channel data that encodes addressing information; obtain, based on the addressing information, data from any memory channel of the system memory using the interface controller; and write the obtained data to a vector memory of the processor core via the corresponding channel controller that includes the respective fetch unit.

    Load balancing for memory channel controllers

    Publication Number: US11222258B2

    Publication Date: 2022-01-11

    Application Number: US16865539

    Filing Date: 2020-05-04

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer-readable media, are described for performing neural network computations using a system configured to implement a neural network on a hardware circuit. The system includes a process ID unit that receives requests to obtain data from a memory that includes memory locations that are each identified by an address. For each request, the process ID unit selects a channel controller to receive the request, provides the request to be processed by the selected channel controller, and obtains the data from memory in response to processing the request using the selected channel controller. The channel controller is one of multiple channel controllers that are configured to access any memory location of the memory. The system performs the neural network computations using the data obtained from memory and resources allocated from a shared memory of the hardware circuit.
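
    A toy model of the dispatch step: a process ID unit selects a channel controller for each request, and because every controller can reach every memory location, the selection policy is free to balance load. Round-robin below is an illustrative policy chosen for the sketch, not necessarily the patented one.

        from itertools import cycle

        class ProcessIdUnit:
            def __init__(self, controllers):
                self._next = cycle(controllers)  # round-robin selection

            def handle(self, address, memory):
                controller = next(self._next)       # pick a channel controller
                return controller, memory[address]  # any controller can serve it

        memory = {0x10: "a", 0x20: "b", 0x30: "c"}
        unit = ProcessIdUnit(controllers=["cc0", "cc1"])
        for addr in (0x10, 0x20, 0x30):
            print(unit.handle(addr, memory))
        # ('cc0', 'a') ('cc1', 'b') ('cc0', 'c')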
