DISTRIBUTED HARDWARE TRACING
    21.
    Invention Application

    Publication Number: US20190332509A1

    Publication Date: 2019-10-31

    Application Number: US16411569

    Filing Date: 2019-05-14

    Applicant: Google LLC

    Abstract: A computer-implemented method, executed by one or more processors, includes monitoring execution of program code executed by a first processor component and monitoring execution of program code executed by a second processor component. A computing system stores data identifying hardware events in a memory buffer. The stored events occur across processor units that include at least the first and second processor components. Each hardware event includes an event time stamp and metadata characterizing the event. The system generates a data structure identifying the hardware events. The data structure arranges the events in a time-ordered sequence and associates each event with at least the first or second processor component. The system stores the data structure in a memory bank of a host device and uses the data structure to analyze performance of the program code executed by the first or second processor components.
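
    A minimal sketch of the kind of data structure this abstract describes, with hypothetical names (HardwareEvent, build_trace) that do not come from the patent: per-component event buffers, each entry carrying a time stamp and metadata, are merged into one time-ordered sequence.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class HardwareEvent:
    timestamp: int                            # event time stamp (the sort key)
    processor_id: str = field(compare=False)  # originating processor component
    metadata: dict = field(compare=False)     # data characterizing the event

def build_trace(*per_component_buffers):
    """Merge per-component buffers (each already sorted by timestamp)
    into a single time-ordered sequence of hardware events."""
    return list(heapq.merge(*per_component_buffers))

# Events captured by two processor components, interleaved by time stamp.
trace = build_trace(
    [HardwareEvent(1, "fpc_0", {"op": "load"}),
     HardwareEvent(7, "fpc_0", {"op": "sync"})],
    [HardwareEvent(4, "spc_1", {"op": "dma"})],
)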

    Vector reductions using shared scratchpad memory
    22.
    Invention Grant

    Publication Number: US11934826B2

    Publication Date: 2024-03-19

    Application Number: US17530869

    Filing Date: 2021-11-19

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer-readable media, are described for performing vector reductions using a shared scratchpad memory of a hardware circuit having processor cores that communicate with the shared memory. For each of the processor cores, a respective vector of values is generated based on computations performed at the processor core. The shared memory receives the respective vectors of values from respective resources of the processor cores using a direct memory access (DMA) data path of the shared memory. The shared memory performs an accumulation operation on the respective vectors of values using an operator unit coupled to the shared memory. The operator unit is configured to accumulate values based on arithmetic operations encoded at the operator unit. A result vector is generated based on performing the accumulation operation using the respective vectors of values.
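
    As a rough model of the accumulation step, the sketch below (hypothetical helper shared_memory_reduce, not taken from the patent) combines one vector per processor core element-wise using the arithmetic operation encoded at the operator unit, yielding the result vector:

```python
from functools import reduce
import operator

def shared_memory_reduce(core_vectors, op=operator.add):
    """Element-wise accumulation of per-core vectors, standing in for the
    operator unit coupled to the shared scratchpad memory."""
    def accumulate(acc, vec):
        return [op(a, v) for a, v in zip(acc, vec)]
    return reduce(accumulate, core_vectors)

# Three cores each contribute a 4-element partial result.
result = shared_memory_reduce([[1, 2, 3, 4], [10, 20, 30, 40], [100, 200, 300, 400]])
print(result)  # [111, 222, 333, 444]
```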

    Synchronous hardware event collection
    23.
    Invention Grant

    Publication Number: US11921611B2

    Publication Date: 2024-03-05

    Application Number: US17571373

    Filing Date: 2022-01-07

    Applicant: Google LLC

    Abstract: A computer-implemented method includes monitoring execution of program code by first and second processor components. A computing system detects that a trigger condition is satisfied by: i) identifying an operand in a portion of the program code; or ii) determining that a current time of a clock of the computing system indicates a predefined time value. The operand and the predefined time value are used to initiate trace events. When the trigger condition is satisfied, the system initiates trace events that generate trace data identifying respective hardware events occurring across the computing system. The system uses the trace data to generate a correlated set of trace data that indicates a time-ordered sequence of the respective hardware events, and uses that correlated set to analyze performance of the executing program code.
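
    The trigger logic reduces to a simple disjunction. A sketch, with invented names (trigger_satisfied, start_trace_events) standing in for the hardware mechanism:

```python
def trigger_satisfied(code_operand, trace_operand, clock_now, predefined_time):
    """The two trigger conditions named in the abstract:
    i)  an operand identified in the program code matches the operand
        used to initiate trace events, or
    ii) the current time of the system clock has reached a predefined
        time value."""
    return code_operand == trace_operand or clock_now >= predefined_time

def start_trace_events():
    # Hypothetical stand-in for the hardware control that begins emitting
    # trace data for events occurring across the computing system.
    print("trace events initiated")

if trigger_satisfied(code_operand=0x2A, trace_operand=0x2A,
                     clock_now=100, predefined_time=250):
    start_trace_events()
```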

    DIRECT MEMORY ACCESS ARCHITECTURE WITH MULTI-LEVEL MULTI-STRIDING
    24.
    Invention Application

    Publication Number: US20240070098A1

    Publication Date: 2024-02-29

    Application Number: US18229616

    Filing Date: 2023-08-02

    Applicant: Google LLC

    CPC classification number: G06F13/28 G06F1/04

    Abstract: DMA architectures capable of performing multi-level multi-striding and determining multiple memory addresses in parallel are described. In one aspect, a DMA system includes one or more hardware DMA threads. Each DMA thread includes a request generator configured to generate, during each parallel memory address computation cycle, m memory addresses for a multi-dimensional tensor in parallel and, for each memory address, a respective request for a memory system to perform a memory operation. The request generator includes m memory address units that each include a step tracker configured to generate, for each dimension of the tensor, a respective step index value for the dimension and, based on the respective step index value, a respective stride offset value for the dimension. Each memory address unit includes a memory address computation element configured to generate a memory address for a tensor element and transmit the request to perform the memory operation.
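
    One way to picture the step tracker and address computation, as a software sketch with assumed names (StepTracker, addresses_per_cycle): per-dimension step indices advance like an odometer, each index contributes index x stride to the offset, and m addresses are emitted per cycle.

```python
class StepTracker:
    """Per-dimension step index values and their stride offsets,
    modeled as a nested-loop counter over a multi-dimensional tensor."""
    def __init__(self, sizes, strides):
        self.sizes = sizes      # extent of each tensor dimension
        self.strides = strides  # element stride of each dimension
        self.indices = [0] * len(sizes)

    def offset(self):
        # Stride offset: sum over dimensions of step index * stride.
        return sum(i * s for i, s in zip(self.indices, self.strides))

    def advance(self):
        # Step the innermost dimension, carrying into outer dimensions.
        for d in reversed(range(len(self.sizes))):
            self.indices[d] += 1
            if self.indices[d] < self.sizes[d]:
                return
            self.indices[d] = 0

def addresses_per_cycle(base, tracker, m):
    """Emit m memory addresses per address computation cycle (the hardware
    computes them in parallel; a loop models that here)."""
    addrs = []
    for _ in range(m):
        addrs.append(base + tracker.offset())
        tracker.advance()
    return addrs

# A 2x3 tensor stored row-major: strides (3, 1), six consecutive elements.
print(addresses_per_cycle(0x1000, StepTracker([2, 3], [3, 1]), m=6))
```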

    Direct memory access architecture with multi-level multi-striding
    25.
    Invention Grant

    Publication Number: US11762793B2

    Publication Date: 2023-09-19

    Application Number: US17728478

    Filing Date: 2022-04-25

    Applicant: Google LLC

    CPC classification number: G06F13/28 G06F1/04

    Abstract: DMA architectures capable of performing multi-level multi-striding and determining multiple memory addresses in parallel are described. In one aspect, a DMA system includes one or more hardware DMA threads. Each DMA thread includes a request generator configured to generate, during each parallel memory address computation cycle, m memory addresses for a multi-dimensional tensor in parallel and, for each memory address, a respective request for a memory system to perform a memory operation. The request generator includes m memory address units that each include a step tracker configured to generate, for each dimension of the tensor, a respective step index value for the dimension and, based on the respective step index value, a respective stride offset value for the dimension. Each memory address unit includes a memory address computation element configured to generate a memory address for a tensor element and transmit the request to perform the memory operation.

    Vector processing unit
    26.
    Invention Grant

    Publication Number: US11520581B2

    Publication Date: 2022-12-06

    Application Number: US17327957

    Filing Date: 2021-05-24

    Applicant: Google LLC

    Abstract: A vector processing unit is described, and includes processor units that each include multiple processing resources. The processor units are each configured to perform arithmetic operations associated with vectorized computations. The vector processing unit includes a vector memory in data communication with each of the processor units and their respective processing resources. The vector memory includes memory banks configured to store data used by each of the processor units to perform the arithmetic operations. The processor units and the vector memory are tightly coupled within an area of the vector processing unit such that data communications are exchanged at a high bandwidth based on the placement of respective processor units relative to one another, and based on the placement of the vector memory relative to each processor unit.
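
    The bank-organized vector memory mentioned in the abstract can be sketched as a toy model; the low-order interleaved bank-selection rule below is an assumption, not taken from the patent:

```python
class VectorMemory:
    """Banked vector memory shared by every processor unit; a simple
    low-order interleave maps an element address to (bank, offset)."""
    def __init__(self, num_banks, bank_size):
        self.banks = [[0.0] * bank_size for _ in range(num_banks)]

    def _locate(self, addr):
        return addr % len(self.banks), addr // len(self.banks)

    def read(self, addr):
        bank, offset = self._locate(addr)
        return self.banks[bank][offset]

    def write(self, addr, value):
        bank, offset = self._locate(addr)
        self.banks[bank][offset] = value

# Consecutive addresses land in different banks, so processor units can
# access neighboring elements without contending for a single bank.
vmem = VectorMemory(num_banks=8, bank_size=128)
vmem.write(3, 1.5)
print(vmem.read(3))  # 1.5
```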

    DIRECT MEMORY ACCESS ARCHITECTURE WITH MULTI-LEVEL MULTI-STRIDING
    27.
    Invention Application

    Publication Number: US20220327075A1

    Publication Date: 2022-10-13

    Application Number: US17728478

    Filing Date: 2022-04-25

    Applicant: Google LLC

    Abstract: DMA architectures capable of performing multi-level multi-striding and determining multiple memory addresses in parallel are described. In one aspect, a DMA system includes one or more hardware DMA threads. Each DMA thread includes a request generator configured to generate, during each parallel memory address computation cycle, m memory addresses for a multi-dimensional tensor in parallel and, for each memory address, a respective request for a memory system to perform a memory operation. The request generator includes m memory address units that each include a step tracker configured to generate, for each dimension of the tensor, a respective step index value for the dimension and, based on the respective step index value, a respective stride offset value for the dimension. Each memory address unit includes a memory address computation element configured to generate a memory address for a tensor element and transmit the request to perform the memory operation.

    VECTOR PROCESSING UNIT
    28.
    Invention Application

    Publication Number: US20210357212A1

    Publication Date: 2021-11-18

    Application Number: US17327957

    Filing Date: 2021-05-24

    Applicant: Google LLC

    Abstract: A vector processing unit is described, and includes processor units that each include multiple processing resources. The processor units are each configured to perform arithmetic operations associated with vectorized computations. The vector processing unit includes a vector memory in data communication with each of the processor units and their respective processing resources. The vector memory includes memory banks configured to store data used by each of the processor units to perform the arithmetic operations. The processor units and the vector memory are tightly coupled within an area of the vector processing unit such that data communications are exchanged at a high bandwidth based on the placement of respective processor units relative to one another, and based on the placement of the vector memory relative to each processor unit.

    DIRECT MEMORY ACCESS ARCHITECTURE WITH MULTI-LEVEL MULTI-STRIDING
    29.
    Invention Application

    Publication Number: US20210255976A1

    Publication Date: 2021-08-19

    Application Number: US16838796

    Filing Date: 2020-04-02

    Applicant: Google LLC

    Abstract: DMA architectures capable of performing multi-level multi-striding and determining multiple memory addresses in parallel are described. In one aspect, a DMA system includes one or more hardware DMA threads. Each DMA thread includes a request generator configured to generate, during each parallel memory address computation cycle, m memory addresses for a multi-dimensional tensor in parallel and, for each memory address, a respective request for a memory system to perform a memory operation. The request generator includes m memory address units that each include a step tracker configured to generate, for each dimension of the tensor, a respective step index value for the dimension and, based on the respective step index value, a respective stride offset value for the dimension. Each memory address unit includes a memory address computation element configured to generate a memory address for a tensor element and transmit the request to perform the memory operation.

    Neural network processor
    30.
    Invention Grant

    Publication Number: US11049016B2

    Publication Date: 2021-06-29

    Application Number: US16824411

    Filing Date: 2020-03-19

    Applicant: Google LLC

    Abstract: A circuit performs neural network computations for a neural network comprising a plurality of neural network layers. The circuit comprises: a matrix computation unit configured to, for each of the plurality of neural network layers, receive a plurality of weight inputs and a plurality of activation inputs for the neural network layer and generate a plurality of accumulated values based on those inputs; and a vector computation unit, communicatively coupled to the matrix computation unit, configured to apply an activation function to each accumulated value generated by the matrix computation unit to generate a plurality of activated values for the neural network layer.
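
    A minimal functional model of the two units (hypothetical function neural_network_layer, not the patent's implementation): the matrix computation unit produces accumulated values as dot products of weights and activations, and the vector computation unit applies the activation function to each one.

```python
def neural_network_layer(weights, activations, activation_fn):
    """One layer: accumulated values from the matrix computation unit,
    then the activation function applied by the vector computation unit."""
    accumulated = [sum(w * a for w, a in zip(row, activations))
                   for row in weights]
    return [activation_fn(v) for v in accumulated]

relu = lambda x: max(0.0, x)
# A 2x3 weight matrix applied to a 3-element activation vector.
print(neural_network_layer([[1, -2, 3], [-1, 2, -3]], [1.0, 2.0, 3.0], relu))
# -> [6.0, 0.0]
```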
