-
Publication No.: US12205013B1
Publication Date: 2025-01-21
Application No.: US17009483
Application Date: 2020-09-01
Applicant: Amazon Technologies, Inc.
Inventor: Thiam Khean Hah , Randy Renfu Huang , Richard John Heaton , Ron Diamant , Vignesh Vivekraja
Abstract: Accelerated convolution of neural networks can be performed by executing N computing engines (CEs) of a neural network processor in parallel. An input dataset can be divided spatially into N chunks such that a respective last portion of each chunk overlaps with a respective first portion of a subsequent chunk. Portions of each chunk can be processed by a respective CE to generate a respective portion of an output dataset. The overlapping intermediate states computed by each CE from processing the overlapping portion can be stored locally for sharing with a subsequent CE using an on-chip bus.
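A minimal Python sketch of the spatial split described above, using a 1D convolution and NumPy; the chunk count, kernel, and halo handling are illustrative assumptions rather than details from the patent. The overlap of (kernel length - 1) elements stands in for the intermediate state one CE shares with the next over the on-chip bus.
```python
import numpy as np

def split_conv_1d(x, kernel, n_engines=4):
    """Emulate N computing engines convolving overlapping spatial chunks."""
    halo = len(kernel) - 1
    chunk = len(x) // n_engines
    outputs = []
    for ce in range(n_engines):
        start = ce * chunk
        # Last engine takes any remainder; the others read a halo of overlap.
        stop = len(x) if ce == n_engines - 1 else start + chunk + halo
        outputs.append(np.convolve(x[start:stop], kernel, mode="valid"))
    return np.concatenate(outputs)

x = np.arange(32, dtype=float)
k = np.array([1.0, 0.5, 0.25])
# The chunked result matches a single full convolution over the whole input.
assert np.allclose(split_conv_1d(x, k), np.convolve(x, k, mode="valid"))
```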
-
Publication No.: US12182695B1
Publication Date: 2024-12-31
Application No.: US18474129
Application Date: 2023-09-25
Applicant: Amazon Technologies, Inc.
Inventor: Paul Gilbert Meyer , Thiam Khean Hah , Randy Renfu Huang , Ron Diamant , Vignesh Vivekraja
Abstract: A systolic array can implement an architecture tailored to perform matrix multiplications on sparse matrices. Each processing element in the systolic array may include a register configured to store a value, and a multiplexor configured to select an input element from multiple input data buses based on metadata associated with the value. Each processing element may also include a multiplier configured to multiply the selected input element with the value to generate a multiplication result, and an adder configured to add the multiplication result to a partial sum input to generate a partial sum output.
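A rough Python model of a single processing element, with the register, multiplexor, multiplier, and adder folded into one step; the bus layout and metadata encoding are assumptions made for illustration, not the patent's design.
```python
class ProcessingElement:
    """Sketch of one PE: a stored value plus metadata that selects which of
    several input data buses feeds the multiplier."""

    def __init__(self, weight, bus_select):
        self.weight = weight          # value held in the PE's register
        self.bus_select = bus_select  # metadata: which input bus to read

    def step(self, input_buses, partial_sum_in):
        x = input_buses[self.bus_select]          # multiplexor
        return partial_sum_in + x * self.weight   # multiplier + adder

# One column of PEs accumulating a partial sum down the array.
pes = [ProcessingElement(2.0, 0), ProcessingElement(0.5, 2)]
buses = [1.0, 3.0, 4.0]
acc = 0.0
for pe in pes:
    acc = pe.step(buses, acc)
print(acc)  # 2.0*1.0 + 0.5*4.0 = 4.0
```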
-
Publication No.: US12141468B1
Publication Date: 2024-11-12
Application No.: US17875805
Application Date: 2022-07-28
Applicant: Amazon Technologies, Inc.
Inventor: Kun Xu , Paul Gilbert Meyer , Ron Diamant
Abstract: In one example, an apparatus comprises: a memory array having an array of memory elements arranged in rows and columns, each memory element being configured to store a data element; and a memory access circuit configured to: perform a row write operation to store a first group of data elements at a first row of the array of memory elements; perform a column read operation at a first column of the array of memory elements to obtain a second group of data elements; and perform a column write operation to store a third group of data elements at the first column of the array of memory elements to replace the second group of data elements.
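A small NumPy sketch of the described access pattern (class and method names are hypothetical): rows are written as whole groups, while a column can be read out and then overwritten in place.
```python
import numpy as np

class RowColumnBuffer:
    """Memory array supporting row writes alongside column reads/writes."""

    def __init__(self, rows, cols):
        self.cells = np.zeros((rows, cols))

    def write_row(self, r, data):
        self.cells[r, :] = data            # row write operation

    def read_col(self, c):
        return self.cells[:, c].copy()     # column read operation

    def write_col(self, c, data):
        self.cells[:, c] = data            # column write, replacing old data

buf = RowColumnBuffer(4, 4)
buf.write_row(0, [1, 2, 3, 4])
old = buf.read_col(1)                      # read a column out...
buf.write_col(1, old * 10)                 # ...and replace it in place
```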
-
Publication No.: US12093806B1
Publication Date: 2024-09-17
Application No.: US16459501
Application Date: 2019-07-01
Applicant: Amazon Technologies, Inc.
Inventor: Jindrich Zejda , Ron Diamant , Jeffrey T. Huynh , Drazen Borkovic , Randy Renfu Huang , Richard John Heaton
Abstract: Static memory allocation may be performed for weight values across multiple processing units executing a neural network. A neural network may be received for execution across multiple processing units. A partitioning scheme may be applied to divide the neural network into subgraphs. The subgraphs may be assigned to different processing units. The weights for the operations of each subgraph may be statically allocated in dedicated caches for the processing units as part of the instructions to execute the neural network across the processing units.
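A simplified Python sketch of static weight allocation under an assumed partition; the subgraphs, operation names, and sizes are invented to show how fixed cache offsets could be assigned at compile time.
```python
# Hypothetical partition: operation name -> weight size in bytes, per unit.
subgraphs = [
    {"conv1": 4096, "conv2": 8192},   # assigned to processing unit 0
    {"fc1": 16384, "fc2": 2048},      # assigned to processing unit 1
]

def allocate_weights(subgraphs):
    """Assign a fixed cache offset to every weight tensor at compile time."""
    plan = {}
    for unit, ops in enumerate(subgraphs):
        offset = 0
        for op, size in ops.items():
            plan[op] = {"unit": unit, "offset": offset, "size": size}
            offset += size                 # next weight sits right after
    return plan

for op, slot in allocate_weights(subgraphs).items():
    print(f"{op}: unit {slot['unit']}, cache offset {slot['offset']}")
```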
-
Publication No.: US12073199B2
Publication Date: 2024-08-27
Application No.: US16433786
Application Date: 2019-06-06
Applicant: Amazon Technologies, Inc.
Inventor: Vignesh Vivekraja , Randy Renfu Huang , Yu Zhou , Ron Diamant , Richard John Heaton
CPC classification number: G06F8/4441 , G06N3/04 , G06N3/10
Abstract: In various implementations, provided are systems and methods for reducing neural network processing. A compiler may generate instructions from source code for a neural network having a repeatable set of operations. The instructions may include a plurality of blocks. The compiler may add an overwrite instruction to the plurality of blocks that, when executed by one or more execution engines, triggers an overwrite action. The overwrite action causes the instructions of subsequent blocks to be overwritten with NOP instructions. The overwrite action is triggered only when a condition is satisfied.
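A toy Python interpreter illustrating the overwrite idea; the instruction names and block layout are invented for the example. Once the condition is satisfied, every later block is rewritten to NOPs and is therefore skipped.
```python
NOP = "nop"

def run(blocks, condition):
    """Execute instruction blocks; when the overwrite condition holds, the
    remaining blocks are replaced with NOPs."""
    for i, block in enumerate(blocks):
        for instr in block:
            if instr == NOP:
                continue
            print("exec", instr)
            if instr == "overwrite_if_done" and condition():
                # Overwrite action: blank out all subsequent blocks.
                for later in blocks[i + 1:]:
                    later[:] = [NOP] * len(later)

blocks = [["matmul_0", "overwrite_if_done"], ["matmul_1"], ["matmul_2"]]
run(blocks, condition=lambda: True)  # matmul_1 and matmul_2 become NOPs
```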
-
Publication No.: US12026607B1
Publication Date: 2024-07-02
Application No.: US17964291
Application Date: 2022-10-12
Applicant: Amazon Technologies, Inc.
Inventor: Jeffrey T. Huynh , Ron Diamant
CPC classification number: G06N3/063 , G06F15/8046 , G06N3/02
Abstract: A neural network accelerator executes instructions to: load a first weight data element of an array of weight data elements from a memory into a systolic array; extract, from the instructions, information indicating a first number of input data elements to be obtained from a first address of the memory and a second number of input data elements to be skipped between adjacent input data elements to be obtained, the first address being based on first coordinates of the first weight data element, and the first and second numbers being based on a stride of a convolution operation; based on the information, obtain first input data elements from the first address of the memory; and control the systolic array to perform first computations based on the first weight data element and the first input data elements to generate first output data elements of an output data array.
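A short Python sketch of the strided input fetch that the instruction metadata describes; the address, element count, and skip value are arbitrary, and a scalar multiply stands in for the systolic array computation.
```python
import numpy as np

def fetch_inputs(memory, first_address, num_elements, skip):
    """Gather inputs: start at first_address, take num_elements, skipping
    `skip` elements between adjacent ones (per the instruction metadata)."""
    step = skip + 1
    return memory[first_address:first_address + num_elements * step:step]

memory = np.arange(20, dtype=float)
weight = 0.5                                  # one weight data element
inputs = fetch_inputs(memory, first_address=2, num_elements=4, skip=1)
outputs = weight * inputs                     # stand-in for the systolic array
print(inputs, outputs)                        # [2. 4. 6. 8.] [1. 2. 3. 4.]
```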
-
Publication No.: US11960997B1
Publication Date: 2024-04-16
Application No.: US17570673
Application Date: 2022-01-07
Applicant: Amazon Technologies, Inc.
Inventor: Randy Huang , Ron Diamant
Abstract: Disclosed herein are techniques for classifying data with a data processing circuit. In one embodiment, the data processing circuit includes a probabilistic circuit configurable to generate a decision at a pre-determined probability, and an output generation circuit including an output node and configured to receive input data and a weight, and generate output data at the output node for approximating a product of the input data and the weight. The generation of the output data includes propagating the weight to the output node according to a first decision of the probabilistic circuit. The probabilistic circuit is configured to generate the first decision at a probability determined based on the input data.
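A quick Monte Carlo sketch of the approximation: the weight is propagated to the output with a probability set by the input, so the average output converges to the product. The trial count and the assumption that the input lies in [0, 1] are illustrative.
```python
import random

def stochastic_multiply(x, weight, trials=10000):
    """Approximate x * weight by propagating `weight` to the output with
    probability x (x assumed to lie in [0, 1])."""
    total = 0.0
    for _ in range(trials):
        if random.random() < x:      # probabilistic decision driven by input
            total += weight          # propagate the weight to the output node
    return total / trials            # expected value converges to x * weight

print(stochastic_multiply(0.3, 2.0))  # close to 0.6
```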
-
Publication No.: US20240111528A1
Publication Date: 2024-04-04
Application No.: US17934147
Application Date: 2022-09-21
Applicant: Amazon Technologies, Inc.
Inventor: Xiaodan Tan , Paul Gilbert Meyer , Sheng Xu , Ron Diamant
CPC classification number: G06F9/30036 , G06F9/30145 , G06F9/3555
Abstract: A technique to execute transpose and compute operations may include retrieving a set of machine instructions from an instruction buffer of a data processor. The instruction buffer has multiple entries, and each entry stores one machine instruction. A machine instruction from the set of machine instructions is executed to transpose a submatrix of an input tensor and perform computations on column elements of the submatrix. The machine instruction combines the transpose operation with computational operations into a single machine instruction.
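A small NumPy sketch of a fused transpose-and-compute step; a column sum stands in for the computational part of the instruction, and the tile coordinates are arbitrary.
```python
import numpy as np

def transpose_and_reduce(tensor, row0, col0, size):
    """Emulate a fused instruction: transpose a size x size submatrix and
    compute a per-column reduction (here, a sum) in one step."""
    sub = tensor[row0:row0 + size, col0:col0 + size]
    transposed = sub.T                          # transpose portion
    return transposed, transposed.sum(axis=0)   # compute on column elements

t = np.arange(16.0).reshape(4, 4)
sub_t, col_sums = transpose_and_reduce(t, 0, 0, 2)
print(sub_t)      # [[0. 4.] [1. 5.]]
print(col_sums)   # [1. 9.]
```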
-
Publication No.: US11868895B2
Publication Date: 2024-01-09
Application No.: US18154576
Application Date: 2023-01-13
Applicant: Amazon Technologies, Inc.
Inventor: Randy Renfu Huang , Ron Diamant , Richard John Heaton
Abstract: A computer-implemented method includes receiving a neural network model that includes a tensor operation, dividing the tensor operation into a set of sub-operations, and generating instructions for performing a plurality of sub-operations of the set of sub-operations on respective computing engines of a plurality of computing engines on a same integrated circuit device or on different integrated circuit devices. Each sub-operation of the set of sub-operations generates a portion of a final output of the tensor operation. An inference is made based on a result of a sub-operation of the plurality of sub-operations, or based on results of the plurality of sub-operations.
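A minimal NumPy sketch of dividing a tensor operation (here a matrix multiply) into sub-operations, each producing one slice of the final output; the split along output rows and the engine count are assumptions for illustration.
```python
import numpy as np

def split_matmul(a, b, n_engines=2):
    """Divide a matmul into sub-operations, one per computing engine; each
    sub-operation produces a portion of the final output."""
    row_chunks = np.array_split(a, n_engines, axis=0)
    partial_results = [chunk @ b for chunk in row_chunks]  # one per engine
    return np.vstack(partial_results)

a = np.random.rand(8, 4)
b = np.random.rand(4, 3)
assert np.allclose(split_matmul(a, b), a @ b)
```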
-
Publication No.: US11775268B1
Publication Date: 2023-10-03
Application No.: US17341762
Application Date: 2021-06-08
Applicant: Amazon Technologies, Inc.
Inventor: Preston Pengra Briggs , Ron Diamant , Robert Geva
CPC classification number: G06F8/41 , G06F8/441 , G06F9/30123 , G06F12/0646 , G06F2212/1024
Abstract: A compiler-implemented technique for performing a storage allocation is described. Computer code to be converted into machine instructions for execution on an integrated circuit device is received. The integrated circuit device includes a memory having a set of memory locations. Based on the computer code, a set of values that are to be stored on the integrated circuit device are determined. An interference graph that includes the set of values and a set of interferences is constructed. While traversing the interference graph, a set of memory location assignments are generated by assigning the set of values to the set of memory locations in accordance with one or more color selection schemes.
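A compact Python sketch of greedy graph-coloring allocation; the values, interference edges, and memory locations are invented, and a single simple selection rule stands in for the color selection schemes the patent describes.
```python
def color_allocate(values, interferences, memory_locations):
    """Greedy coloring: each value gets a memory location (color) not used
    by any interfering, already-placed value."""
    assignment = {}
    for v in values:                                  # traverse the graph
        taken = {assignment[u] for u in interferences.get(v, ())
                 if u in assignment}
        for loc in memory_locations:                  # simple selection scheme
            if loc not in taken:
                assignment[v] = loc
                break
        else:
            raise RuntimeError(f"no free location for {v}; spill needed")
    return assignment

values = ["a", "b", "c", "d"]
interferences = {"a": ["b"], "b": ["a", "c"], "c": ["b"], "d": []}
print(color_allocate(values, interferences, ["slot0", "slot1"]))
# {'a': 'slot0', 'b': 'slot1', 'c': 'slot0', 'd': 'slot0'}
```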