Sparse matrix-vector multiplication

    Publication Number: US11995149B2

    Publication Date: 2024-05-28

    Application Number: US17125457

    Application Date: 2020-12-17

    CPC classification number: G06F17/16 G06F9/30098 G06F15/80

    Abstract: A processing system includes a first set and a second set of general-purpose registers (GPRs) and memory access circuitry that fetches nonzero values of a sparse matrix into consecutive slots in the first set. The memory access circuitry also fetches values of an expanded matrix into consecutive slots in the second set of GPRs. The expanded matrix is formed based on values of a vector and locations of the nonzero values in the sparse matrix. The processing system also includes a set of multipliers that concurrently perform multiplication of the nonzero values in slots of the first set of GPRs with the values of the vector in corresponding slots of the second set. Reduced sum circuitry accumulates results from the set of multipliers for rows of the sparse matrix.
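
    The gather-then-multiply-then-reduce flow described above corresponds closely to a CSR-style sparse kernel. The following is a minimal NumPy sketch of that flow, assuming a standard CSR layout; the function and variable names are illustrative and not taken from the patent.

```python
import numpy as np

def spmv_expanded(values, col_idx, row_ptr, x):
    """Sparse matrix-vector multiply in the spirit of the abstract:
    the nonzero values sit in one contiguous buffer (the "first set of
    GPRs") and the vector entries they pair with are gathered into a
    second contiguous buffer (the "expanded matrix") before a parallel
    multiply and a per-row reduced sum."""
    expanded = x[col_idx]          # gather vector values at nonzero columns
    products = values * expanded   # one multiplier per nonzero, in parallel
    y = np.zeros(len(row_ptr) - 1, dtype=products.dtype)
    for r in range(len(y)):        # reduced-sum accumulation per row
        y[r] = products[row_ptr[r]:row_ptr[r + 1]].sum()
    return y

# Example: a 3x4 sparse matrix in CSR form times a length-4 vector.
values  = np.array([2.0, 1.0, 3.0, 4.0])
col_idx = np.array([0, 2, 1, 3])
row_ptr = np.array([0, 2, 3, 4])
x = np.array([1.0, 2.0, 3.0, 4.0])
print(spmv_expanded(values, col_idx, row_ptr, x))  # [ 5.  6. 16.]
```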

    Dynamically adaptable arrays for vector and matrix operations

    Publication Number: US11409840B2

    Publication Date: 2022-08-09

    Application Number: US17032314

    Application Date: 2020-09-25

    Abstract: An array processor includes processor element arrays distributed in rows and columns. The processor element arrays perform operations on parameter values. The array processor also includes memory interfaces that are dynamically mapped to mutually exclusive subsets of the rows and columns of the processor element arrays based on dimensions of matrices that provide the parameter values to the processor element arrays. In some cases, the processor element arrays are vector arithmetic logic unit (ALU) processors and the memory interfaces are direct memory access (DMA) engines. The rows of the processor element arrays in the subsets are mutually exclusive to the rows in the other subsets and the columns of the processor element arrays in the subsets are mutually exclusive to the columns in the other subsets. The matrices can be symmetric or asymmetric, e.g., one of the matrices can be a vector having a single column.
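
    As a rough illustration of the dynamic mapping described above, the sketch below chooses how many memory interfaces to activate from the operand dimensions and then gives each active interface a mutually exclusive slice of the grid's rows and columns. The names and the specific partitioning rule are assumptions for illustration, not details from the patent.

```python
def map_memory_interfaces(grid_rows, grid_cols, m, n, num_ifaces):
    """Hypothetical sketch of the dynamic mapping: pick how many memory
    interfaces to activate from the operand dimensions (an m x k by k x n
    product), then give each active interface a mutually exclusive slice
    of the grid's rows and a mutually exclusive slice of its columns."""
    # A degenerate dimension (e.g. a vector operand, n == 1) needs fewer
    # interfaces than a large square product does.
    active = max(1, min(num_ifaces, m, n, grid_rows, grid_cols))
    row_step = grid_rows // active
    col_step = grid_cols // active
    return {
        i: {"rows": list(range(i * row_step, (i + 1) * row_step)),
            "cols": list(range(i * col_step, (i + 1) * col_step))}
        for i in range(active)
    }

# Symmetric case: 8x8 grid, four interfaces, a 64x64-by-64x64 product.
print(map_memory_interfaces(8, 8, m=64, n=64, num_ifaces=4))
# Asymmetric case: the second operand is a single-column vector, so only
# one interface is mapped and it covers the whole grid.
print(map_memory_interfaces(8, 8, m=64, n=1, num_ifaces=4))
```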

    VERTICAL AND HORIZONTAL BROADCAST OF SHARED OPERANDS

    Publication Number: US20230289191A1

    Publication Date: 2023-09-14

    Application Number: US18128642

    Application Date: 2023-03-30

    CPC classification number: G06F9/3887 G06F13/28 G06F13/4027

    Abstract: An array processor includes processor element arrays distributed in rows and columns. The processor element arrays perform operations on parameter values. The array processor also includes memory interfaces that broadcast sets of the parameter values to mutually exclusive subsets of the rows and columns of the processor element arrays. In some cases, the array processor includes single-instruction-multiple-data (SIMD) units including subsets of the processor element arrays in corresponding rows, workgroup processors (WGPs) including subsets of the SIMD units, and a memory fabric configured to interconnect with an external memory that stores the parameter values. The memory interfaces broadcast the parameter values to the SIMD units that include the processor element arrays in rows associated with the memory interfaces and columns of processor element arrays that are implemented across the SIMD units in the WGPs. The memory interfaces access the parameter values from the external memory via the memory fabric.
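
    The row/column broadcast pattern described above can be pictured with a small outer-product-style sketch: each shared operand is fetched once and reused across an entire row or column of the processor-element grid. The NumPy sketch below is illustrative only; the names and the multiply payload are assumptions, not the patent's design.

```python
import numpy as np

def broadcast_multiply(row_operands, col_operands):
    """Sketch of the broadcast pattern: operand row_operands[r] is
    broadcast horizontally across row r, operand col_operands[c] is
    broadcast vertically down column c, and the processor element at
    (r, c) combines the two values it receives (here with a multiply)."""
    rows, cols = len(row_operands), len(col_operands)
    grid = np.zeros((rows, cols))
    for r in range(rows):          # horizontal broadcast of row_operands[r]
        for c in range(cols):      # vertical broadcast of col_operands[c]
            grid[r, c] = row_operands[r] * col_operands[c]
    return grid

# Each value is fetched once and reused across a whole row or column,
# which is the point of broadcasting shared operands.
print(broadcast_multiply(np.array([1.0, 2.0]), np.array([10.0, 20.0, 30.0])))
```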

    Vertical and horizontal broadcast of shared operands

    Publication Number: US11635967B2

    Publication Date: 2023-04-25

    Application Number: US17032307

    Application Date: 2020-09-25

    Abstract: An array processor includes processor element arrays distributed in rows and columns. The processor element arrays perform operations on parameter values. The array processor also includes memory interfaces that broadcast sets of the parameter values to mutually exclusive subsets of the rows and columns of the processor element arrays. In some cases, the array processor includes single-instruction-multiple-data (SIMD) units including subsets of the processor element arrays in corresponding rows, workgroup processors (WGPs) including subsets of the SIMD units, and a memory fabric configured to interconnect with an external memory that stores the parameter values. The memory interfaces broadcast the parameter values to the SIMD units that include the processor element arrays in rows associated with the memory interfaces and columns of processor element arrays that are implemented across the SIMD units in the WGPs. The memory interfaces access the parameter values from the external memory via the memory fabric.

    DEVICE AND METHOD FOR ACCELERATING MATRIX MULTIPLY OPERATIONS

    Publication Number: US20210209192A1

    Publication Date: 2021-07-08

    Application Number: US17208526

    Application Date: 2021-03-22

    Abstract: A processing device is provided which comprises memory configured to store data and a plurality of processor cores in communication with each other via first and second hierarchical communication links. Processor cores of a first hierarchical processor core group are in communication with each other via the first hierarchical communication links and are configured to store, in the memory, a sub-portion of data of a first matrix and a sub-portion of data of a second matrix. The processor cores are also configured to determine a product of the sub-portion of data of the first matrix and the sub-portion of data of the second matrix, receive, from another processor core, another sub-portion of data of the second matrix and determine a product of the sub-portion of data of the first matrix and the other sub-portion of data of the second matrix.
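
    The compute-then-exchange loop in this abstract resembles a blocked matrix multiply in which the first-matrix tiles stay resident and the second-matrix tiles rotate among cores. Below is a minimal NumPy sketch of that pattern, assuming a row-block/column-block tiling; the names and the ring-rotation indexing are illustrative, not taken from the patent.

```python
import numpy as np

def ring_matmul(a_row_blocks, b_col_blocks):
    """Each "core" keeps one row block of the first matrix resident,
    multiplies it by the column block of the second matrix it currently
    holds, then takes the next column block from a neighbouring core and
    repeats until every core has seen every column block."""
    n = len(a_row_blocks)                      # one core per row block
    c_blocks = [[None] * n for _ in range(n)]  # c_blocks[i][j] = A_i @ B_j
    for step in range(n):
        for core in range(n):
            j = (core + step) % n              # block held by this core now
            c_blocks[core][j] = a_row_blocks[core] @ b_col_blocks[j]
        # A real device would now forward each B block to the next core;
        # indexing with (core + step) % n models that rotation.
    return np.block(c_blocks)

# Two "cores", each holding a 2x4 row block of A; B's columns split in two.
a = np.arange(16, dtype=float).reshape(4, 4)
b = np.arange(16, dtype=float).reshape(4, 4)
c = ring_matmul([a[:2], a[2:]], [b[:, :2], b[:, 2:]])
assert np.allclose(c, a @ b)
```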

    COMPUTATIONAL OPTICS
    Invention application

    Publication Number: US20190094658A1

    Publication Date: 2019-03-28

    Application Number: US15790825

    Application Date: 2017-10-23

    Abstract: A system and method for controlling characteristics of collected image data are disclosed. The system and method include performing pre-processing of an image using GPUs, configuring an optic based on the pre-processing, the configuring being designed to account for features of the pre-processed image, acquiring an image using the configured optic, processing the acquired image using GPUs, and determining whether the processed acquired image accounts for the features of the pre-processed image. If the determination is affirmative, the image is output; if the determination is negative, the configuring of the optic and the re-acquiring of the image are repeated.
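
    The capture flow described above is a closed loop: pre-process, configure the optic, acquire, process, check, and repeat on failure. The sketch below mirrors that control flow with placeholder callables; none of the names are APIs from the patent, and the retry limit is an added assumption.

```python
def adaptive_capture(preview, configure_optic, acquire, process,
                     accounts_for_features, max_attempts=5):
    """Closed-loop capture: configure the optic from a pre-processed
    preview, acquire and process an image, and re-configure/re-acquire
    until the processed image accounts for the preview's features."""
    features = process(preview)                # GPU pre-processing stage
    for _ in range(max_attempts):
        optic = configure_optic(features)      # adapt the optic to the features
        image = acquire(optic)                 # capture with the configured optic
        processed = process(image)             # GPU processing of the capture
        if accounts_for_features(processed, features):
            return processed                   # affirmative: output the image
    return None                                # negative after max_attempts
```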

    Low latency long short-term memory inference with sequence interleaving

    Publication Number: US11769041B2

    Publication Date: 2023-09-26

    Application Number: US16177218

    Application Date: 2018-10-31

    CPC classification number: G06N3/063 G06F7/5443 G06F17/16 G06N20/00

    Abstract: Systems, apparatuses, and methods for implementing a low latency long short-term memory (LSTM) machine learning engine using sequence interleaving techniques are disclosed. A computing system includes at least a host processing unit, a machine learning engine, and a memory. The host processing unit detects a plurality of sequences which will be processed by the machine learning engine. The host processing unit interleaves the sequences into data blocks and stores the data blocks in the memory. When the machine learning engine receives a given data block, the machine learning engine performs, in parallel, a plurality of matrix multiplication operations on the plurality of sequences in the given data block and a plurality of coefficients. Then, the outputs of the matrix multiplication operations are coupled to one or more LSTM layers.
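
    The host-side interleaving can be pictured as packing the same time step of every sequence into one data block so the engine runs a single batched matrix multiply per step. The NumPy sketch below is a rough illustration under that assumption; the shapes, names, and shared coefficient matrix are illustrative, not details from the patent.

```python
import numpy as np

def interleave_sequences(sequences):
    """Host-side step: pack time step t of every sequence into one data
    block so the engine can run a single batched matrix multiply per step
    instead of one multiply per sequence."""
    # sequences: list of arrays, each shaped (timesteps, features)
    return np.stack(sequences, axis=1)     # (timesteps, num_seqs, features)

def engine_step(block, weights):
    """One data block of work: all interleaved sequences hit the shared
    coefficient matrix in parallel; the result would feed the LSTM gates."""
    return block @ weights                 # (num_seqs, features) @ (features, hidden)

seqs = [np.random.rand(10, 8) for _ in range(4)]   # four sequences, 10 steps each
blocks = interleave_sequences(seqs)                # shape (10, 4, 8)
w = np.random.rand(8, 16)                          # shared coefficients
outputs = [engine_step(blocks[t], w) for t in range(blocks.shape[0])]
```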

    BROADCAST SYNCHRONIZATION FOR DYNAMICALLY ADAPTABLE ARRAYS

    Publication Number: US20220197655A1

    Publication Date: 2022-06-23

    Application Number: US17548105

    Application Date: 2021-12-10

    Abstract: An array processor includes processor element arrays (PEAs) distributed in rows and columns. The PEAs are configured to perform operations on parameter values. A first sequencer receives a first direct memory access (DMA) instruction that includes a request to read data from at least one address in memory. A texture address (TA) engine requests the data from the memory based on the at least one address and a texture data (TD) engine provides the data to the PEAs. The PEAs provide first synchronization signals to the TD engine to indicate availability of registers for receiving the data. The TD engine provides second synchronization signals to the first sequencer in response to receiving acknowledgments that the PEAs have consumed the data.
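
    The two-level handshake described above (registers-available signals toward the TD engine, consumption acknowledgments back toward the sequencer) can be sketched with a toy producer/consumer model. Everything below, including the class and function names, is an illustrative assumption rather than the patent's design.

```python
from collections import deque

class PEA:
    """Toy processor-element array: signals when registers are free and
    acknowledges once it has consumed a delivered value."""
    def __init__(self, capacity=2):
        self.regs = deque(maxlen=capacity)

    def registers_available(self):   # first synchronization signal
        return len(self.regs) < self.regs.maxlen

    def deliver(self, value):
        self.regs.append(value)

    def consume(self):               # acknowledgment of consumption
        self.regs.popleft()
        return True

def td_engine_deliver(data, peas, sequencer_acks):
    """Hand-off sketch: push data to a PEA only when it reports free
    registers, and notify the sequencer (second synchronization signal)
    once the PEA acknowledges that it consumed the data."""
    for value, pea in zip(data, peas):
        if pea.registers_available():    # wait for register availability
            pea.deliver(value)
            if pea.consume():            # PEA consumed the data
                sequencer_acks.append(value)

peas = [PEA(), PEA()]
acks = []
td_engine_deliver([1, 2], peas, acks)
print(acks)   # both deliveries acknowledged back toward the sequencer
```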

    COMPUTATIONAL OPTICS
    Invention application

    Publication Number: US20210149276A1

    Publication Date: 2021-05-20

    Application Number: US17133151

    Application Date: 2020-12-23

    Abstract: A system and method for controlling characteristics of collected image data are disclosed. The system and method include performing pre-processing of an image using GPUs, configuring an optic based on the pre-processing, the configuring being designed to account for features of the pre-processed image, acquiring an image using the configured optic, processing the acquired image using GPUs, and determining whether the processed acquired image accounts for the features of the pre-processed image. If the determination is affirmative, the image is output; if the determination is negative, the configuring of the optic and the re-acquiring of the image are repeated.

    Method and apparatus for performing processing in a camera

    Publication Number: US10728446B2

    Publication Date: 2020-07-28

    Application Number: US15816537

    Application Date: 2017-11-17

    Abstract: A method and apparatus of performing processing in an image capturing device includes receiving an image by the image capturing device. The image is filtered to generate a first visible light component and a second infrared component. A decontamination is performed on the infrared component to generate a decontaminated infrared component, and an interpolation is performed on the visible component to generate an interpolated visible component, both of which are provided to an image signal processor (ISP) for further processing.
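
    The split/decontaminate/interpolate pipeline described above can be sketched for an RGB-IR style mosaic as follows. The mask layout, the leakage model used for decontamination, and the neighbour-mean interpolation are all assumptions made for illustration; they are not details from the patent.

```python
import numpy as np

def split_and_prepare(raw, ir_mask, leakage=0.1):
    """Split a raw mosaic into a visible component and an infrared
    component, decontaminate the IR component (here by subtracting an
    assumed fraction of visible-light leakage), and interpolate the
    visible component over the IR pixel sites; both results would then
    go to an image signal processor (ISP)."""
    ir = np.where(ir_mask, raw, 0.0)                  # infrared component
    visible = np.where(ir_mask, 0.0, raw)             # visible component
    # Decontamination: strip estimated visible leakage from the IR pixels.
    visible_mean = visible[~ir_mask].mean() if (~ir_mask).any() else 0.0
    ir_clean = np.clip(ir - leakage * visible_mean * ir_mask, 0.0, None)
    # Interpolation: fill visible values at IR pixel sites from neighbours
    # (a crude stand-in for a real demosaicing step).
    interpolated = visible.copy()
    for r, c in zip(*np.where(ir_mask)):
        patch = visible[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
        vals = patch[patch > 0]
        interpolated[r, c] = vals.mean() if vals.size else 0.0
    return interpolated, ir_clean                     # both go to the ISP

# 4x4 mosaic where every other pixel in every other row is an IR sample.
raw = np.arange(16, dtype=float).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[::2, ::2] = True
vis, ir = split_and_prepare(raw, mask)
```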
