Winograd algorithm on a matrix processing architecture

    Publication number: US10482155B2

    Publication date: 2019-11-19

    Application number: US15395542

    Filing date: 2016-12-30

    Abstract: In one embodiment, a matrix operation may be performed, wherein the matrix operation comprises a matrix multiplication operation on a plurality of matrix operands. Matrix data may be received from a multi-dimensional memory, wherein the matrix data is associated with the plurality of matrix operands. The plurality of matrix operands may be extracted from the matrix data, wherein the plurality of matrix operands comprises a first matrix operand and a second matrix operand. A first transform may be performed on the first matrix operand to obtain a transformed matrix operand, wherein performing matrix multiplication using the transformed matrix operand is faster than performing matrix multiplication using the first matrix operand. Matrix multiplication may be performed on the transformed matrix operand to obtain a partial result. A second transform may be performed on the partial result to obtain a result of the matrix multiplication operation.
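
    The abstract above describes the standard Winograd flow: a forward transform on each operand, an element-wise multiplication that yields a partial result, and an inverse transform that recovers the output. The following NumPy sketch of the widely published F(2x2, 3x3) variant illustrates those three steps; the transform matrices and function names come from the textbook formulation, not from the patent itself.

```python
# Minimal F(2x2, 3x3) Winograd convolution sketch (NumPy only).
# Transform matrices follow the commonly published F(2x2, 3x3) form;
# they are not taken from the patent.
import numpy as np

B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=float)   # input transform
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])                 # filter transform
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=float)    # output (inverse) transform

def winograd_f2x2_3x3(d, g):
    """Correlate a 4x4 input tile d with a 3x3 filter g, producing a 2x2 tile."""
    U = G @ g @ G.T          # first transform: filter -> 4x4
    V = B_T @ d @ B_T.T      # first transform: input tile -> 4x4
    M = U * V                # element-wise multiplication stage (partial result)
    return A_T @ M @ A_T.T   # second (inverse) transform -> 2x2 output

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = rng.standard_normal((4, 4))
    g = rng.standard_normal((3, 3))
    # Direct 2D cross-correlation for comparison.
    direct = np.array([[np.sum(d[i:i+3, j:j+3] * g) for j in range(2)]
                       for i in range(2)])
    assert np.allclose(winograd_f2x2_3x3(d, g), direct)
```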

    Distributed matrix multiplication for neural networks

    Publication number: US10169296B2

    Publication date: 2019-01-01

    Application number: US15395527

    Filing date: 2016-12-30

    Abstract: In one embodiment, a matrix operation associated with a plurality of input matrices may be performed. The plurality of input matrices may be partitioned into a plurality of input partitions, wherein the plurality of input matrices is partitioned based on a number of available processing elements. The plurality of input partitions may be distributed among a plurality of processing elements, wherein each input partition is distributed to a particular processing element of the plurality of processing elements. A plurality of partial matrix operations may be performed using the plurality of processing elements, and partial matrix data may be transmitted between the plurality of processing elements while performing the plurality of partial matrix operations. A result of the matrix operation may be determined based on the plurality of partial matrix operations.
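
    As a rough software analogue of the partitioning described above, the sketch below splits one operand row-wise based on the number of available processing elements, computes partial products, and combines them. The row-wise split and the in-process loop standing in for processing elements are illustrative assumptions, not the patented scheme.

```python
# Sketch of row-partitioned matrix multiplication across simulated processing
# elements (PEs). The row-wise split and the in-process "PE" loop are
# illustrative assumptions.
import numpy as np

def distributed_matmul(A, B, num_pes):
    """Compute A @ B by splitting A's rows across num_pes partitions."""
    # Partition the first operand based on the number of available PEs.
    row_partitions = np.array_split(A, num_pes, axis=0)
    # Each PE performs a partial matrix operation on its partition.
    partial_results = [rows @ B for rows in row_partitions]
    # Combine the partial results into the final product.
    return np.vstack(partial_results)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((7, 5))
    B = rng.standard_normal((5, 3))
    assert np.allclose(distributed_matmul(A, B, num_pes=3), A @ B)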

    DISTRIBUTED CONVOLUTION FOR NEURAL NETWORKS

    Publication number: US20220121954A1

    Publication date: 2022-04-21

    Application number: US17564098

    Filing date: 2021-12-28

    Abstract: In one embodiment, a matrix operation may be performed using a plurality of input matrices, wherein the matrix operation is associated with one or more convolution operations. The plurality of input matrices may be partitioned into a plurality of input partitions, wherein the plurality of input matrices is partitioned based on a number of available processing elements. The plurality of input partitions may be distributed among a plurality of processing elements, wherein each input partition is distributed to a particular processing element of the plurality of processing elements. A plurality of partial matrix operations may be performed using the plurality of processing elements, and partial matrix data may be transmitted between the plurality of processing elements while performing the plurality of partial matrix operations. A result of the matrix operation may be determined based on the plurality of partial matrix operations.
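
    A common way to distribute a convolution is to split the input into row slabs, give each processing element its slab plus a few overlapping "halo" rows from its neighbor, and concatenate the partial outputs. The sketch below follows that pattern; the slab partitioning and the halo rows standing in for the partial data exchanged between processing elements are illustrative assumptions.

```python
# Sketch of a 2D cross-correlation whose input is split into row slabs across
# simulated processing elements. The halo rows appended to each slab stand in
# for the partial data exchanged between PEs.
import numpy as np

def correlate2d_valid(x, k):
    """Direct 'valid' 2D cross-correlation."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    return np.array([[np.sum(x[i:i+kh, j:j+kw] * k) for j in range(ow)]
                     for i in range(oh)])

def distributed_conv2d(x, k, num_pes):
    kh = k.shape[0]
    # Split the output rows among the available PEs.
    out_rows = x.shape[0] - kh + 1
    bounds = np.array_split(np.arange(out_rows), num_pes)
    partials = []
    for rows in bounds:
        if rows.size == 0:
            continue
        # Each PE reads its slab plus kh-1 halo rows (the "transmitted" data).
        slab = x[rows[0]:rows[-1] + kh, :]
        partials.append(correlate2d_valid(slab, k))
    return np.vstack(partials)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    x = rng.standard_normal((9, 8))
    k = rng.standard_normal((3, 3))
    assert np.allclose(distributed_conv2d(x, k, num_pes=4),
                       correlate2d_valid(x, k))
```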

    Programmable matrix processing engine

    Publication number: US10896039B2

    Publication date: 2021-01-19

    Application number: US16264483

    Filing date: 2019-01-31

    Abstract: In one embodiment, a matrix operation may be performed on one or more matrix operands. For example, matrix data may be received from a multi-dimensional memory, wherein the matrix data is associated with the one or more matrix operands. The one or more matrix operands may be extracted from the matrix data. A matrix routine associated with the matrix operation may be identified. The matrix routine may be executed on a matrix processor using the one or more matrix operands. A result of the matrix operation may be obtained based on the matrix routine executed by the matrix processor.
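
    The core idea in the abstract is that the engine identifies a matrix routine associated with the requested operation and executes it on a matrix processor. A minimal software analogue is a registry that maps operation names to callables; the class and routine names below are illustrative assumptions and do not reflect the patented hardware interface.

```python
# Toy analogue of a programmable matrix engine: operations are dispatched to
# registered "matrix routines". Names here are illustrative.
import numpy as np

class MatrixProcessor:
    def __init__(self):
        self._routines = {}

    def register_routine(self, name, fn):
        """Program the engine with a named matrix routine."""
        self._routines[name] = fn

    def execute(self, name, *operands):
        """Identify the routine associated with the operation and run it."""
        return self._routines[name](*operands)

if __name__ == "__main__":
    mp = MatrixProcessor()
    mp.register_routine("matmul", lambda a, b: a @ b)
    mp.register_routine("transpose", lambda a: a.T)
    a = np.arange(6, dtype=float).reshape(2, 3)
    b = np.ones((3, 2))
    print(mp.execute("matmul", a, b))
    print(mp.execute("transpose", a))
```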

    DISTRIBUTED CONVOLUTION FOR NEURAL NETWORKS
    Invention application

    Publication number: US20180189652A1

    Publication date: 2018-07-05

    Application number: US15395675

    Filing date: 2016-12-30

    CPC classification number: G06N3/084 G06F17/153 G06F17/16 G06N3/0454 G06N3/063

    Abstract: In one embodiment, a matrix operation may be performed using a plurality of input matrices, wherein the matrix operation is associated with one or more convolution operations. The plurality of input matrices may be partitioned into a plurality of input partitions, wherein the plurality of input matrices is partitioned based on a number of available processing elements. The plurality of input partitions may be distributed among a plurality of processing elements, wherein each input partition is distributed to a particular processing element of the plurality of processing elements. A plurality of partial matrix operations may be performed using the plurality of processing elements, and partial matrix data may be transmitted between the plurality of processing elements while performing the plurality of partial matrix operations. A result of the matrix operation may be determined based on the plurality of partial matrix operations.

    WINOGRAD ALGORITHM ON A MATRIX PROCESSING ARCHITECTURE

    Publication number: US20180189237A1

    Publication date: 2018-07-05

    Application number: US15395542

    Filing date: 2016-12-30

    CPC classification number: G06F17/16 G06F15/80 G06F17/144 G06F17/153

    Abstract: In one embodiment, a matrix operation may be performed, wherein the matrix operation comprises a matrix multiplication operation on a plurality of matrix operands. Matrix data may be received from a multi-dimensional memory, wherein the matrix data is associated with the plurality of matrix operands. The plurality of matrix operands may be extracted from the matrix data, wherein the plurality of matrix operands comprises a first matrix operand and a second matrix operand. A first transform may be performed on the first matrix operand to obtain a transformed matrix operand, wherein performing matrix multiplication using the transformed matrix operand is faster than performing matrix multiplication using the first matrix operand. Matrix multiplication may be performed on the transformed matrix operand to obtain a partial result. A second transform may be performed on the partial result to obtain a result of the matrix multiplication operation.

    PIPELINED CONVOLUTIONAL OPERATIONS FOR PROCESSING CLUSTERS

    Publication number: US20170097884A1

    Publication date: 2017-04-06

    Application number: US14874784

    Filing date: 2015-10-05

    CPC classification number: G06F12/023 G06F15/76 G06F2212/251 G06T1/20

    Abstract: Described herein are one or more integrated circuits (ICs) comprising controller circuitry to receive a command to execute an operation for data inputs stored in an external memory or a local memory, and convert the operation into a set of matrix operations to operate on sub-portions of the data inputs. The IC(s) further comprise at least one processing circuitry to execute the set of matrix operations, the processing circuitry to include ALUs, a local memory external to the ALUs and accessible by the ALUs, and processing control circuitry to create at least one matrix operand in the local memory (from the data inputs of the operation) comprising at least one of a scalar, a vector, or a 2D matrix, and provide memory handles corresponding to each of the matrix operands to one of the ALUs to access the respective matrix operands when executing a matrix operation.
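
    The distinctive element in this abstract is that ALUs receive memory handles, not the operand data itself, and resolve those handles against a local memory when executing a matrix operation. The sketch below is a minimal software analogue of that handle-based flow; the LocalMemory class and alu_matmul function are illustrative assumptions, not the patented circuitry.

```python
# Toy analogue of the handle-based flow described above: a controller places
# operands in a local memory and passes handles (not data) to an ALU.
import numpy as np

class LocalMemory:
    def __init__(self):
        self._store = {}
        self._next = 0

    def create_operand(self, data):
        """Store an operand (scalar, vector, or 2D matrix) and return a handle."""
        handle = self._next
        self._store[handle] = np.asarray(data, dtype=float)
        self._next += 1
        return handle

    def read(self, handle):
        return self._store[handle]

def alu_matmul(mem, handle_a, handle_b):
    """An 'ALU' resolves its operand handles against local memory and multiplies."""
    return mem.read(handle_a) @ mem.read(handle_b)

if __name__ == "__main__":
    mem = LocalMemory()
    h_a = mem.create_operand([[1.0, 2.0], [3.0, 4.0]])
    h_b = mem.create_operand([[5.0], [6.0]])
    print(alu_matmul(mem, h_a, h_b))   # [[17.], [39.]]
```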
