COMPRESSION TECHNIQUES FOR DATA STRUCTURES SUITABLE FOR ARTIFICIAL NEURAL NETWORKS

    Publication No.: US20200373941A1

    Publication Date: 2020-11-26

    Application No.: US16426303

    Filing Date: 2019-05-30

    Abstract: In artificial neural networks and other similar applications, there is typically a large amount of data involved that is considered sparse. Due to the large size of the data in such applications, compressing it saves bandwidth when transmitting the data and memory when storing it. Introduced herein is a compression technique that selects elements with significant values from the data and restructures them into a structured sparse format. By generating metadata that enforces the structured sparse format and organizing the data according to that metadata, the introduced technique not only reduces the size of the data but also consistently places the data in a particular format. As such, hardware can be simplified and optimized to process the data much faster and more efficiently than conventional compression techniques that rely on a non-structured sparsity format.
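The abstract describes selecting significant values and emitting metadata that enforces a structured sparse format. A minimal sketch of that idea, assuming a 2:4 pattern (keep the two largest-magnitude elements in every group of four); the 2:4 ratio and the function names are illustrative assumptions, not taken from the patent text:

```python
import numpy as np

def compress_2to4(row):
    """Compress a dense row into a 2:4 structured sparse form.

    For every group of four consecutive elements, keep the two with the
    largest magnitudes (the "significant" values) and record their
    in-group positions as metadata. Illustrative sketch only; the
    patent's actual selection rule and metadata layout are not given
    in the abstract.
    """
    assert len(row) % 4 == 0
    values, metadata = [], []
    for i in range(0, len(row), 4):
        group = row[i:i + 4]
        # indices of the two largest-magnitude elements, in position order
        keep = sorted(np.argsort(np.abs(group))[-2:])
        values.extend(group[j] for j in keep)
        metadata.extend(keep)
    return np.array(values), np.array(metadata, dtype=np.uint8)

def decompress_2to4(values, metadata):
    """Rebuild the dense row: zeros everywhere except the kept elements."""
    n_groups = len(values) // 2
    row = np.zeros(n_groups * 4)
    for g in range(n_groups):
        for k in range(2):
            row[g * 4 + metadata[2 * g + k]] = values[2 * g + k]
    return row
```

Because every group is guaranteed to contribute exactly two values, the compressed stream has a fixed, predictable layout, which is what lets hardware be simplified relative to unstructured sparsity.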

    Efficient matrix data format applicable for artificial neural network

    Publication No.: US11249727B2

    Publication Date: 2022-02-15

    Application No.: US17073512

    Filing Date: 2020-10-19

    Abstract: Many computing systems process data organized in a matrix format. For example, artificial neural networks (ANNs) perform numerous computations on data organized into matrices using conventional matrix arithmetic operations. One commonly performed operation is the transpose. Additionally, many such systems must process numerous matrices and/or matrices that are large in size. For sparse matrices, which hold few significant values and many values that can be ignored, transmitting and processing all the values is wasteful. Thus, techniques are introduced for storing a sparse matrix in a compressed format that allows a matrix transpose operation to be performed on the compressed matrix without first decompressing it. By utilizing the introduced techniques, more matrix operations can be performed than in conventional systems.
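The key claim above is that a transpose can be applied to the compressed representation itself. A minimal coordinate-style sketch of that idea, assuming a simple (rows, cols, values) layout; the patent's actual compressed format is not specified in the abstract:

```python
import numpy as np

def compress(dense):
    """Store only the significant (nonzero) entries of a sparse matrix
    as index arrays plus values, together with the matrix shape."""
    rows, cols = np.nonzero(dense)
    return {"shape": dense.shape,
            "rows": rows, "cols": cols,
            "vals": dense[rows, cols]}

def transpose_compressed(m):
    """Transpose without decompressing: swap the row and column index
    arrays and the stored shape; the values themselves are untouched."""
    return {"shape": (m["shape"][1], m["shape"][0]),
            "rows": m["cols"], "cols": m["rows"],
            "vals": m["vals"]}

def decompress(m):
    """Scatter the stored values back into a dense matrix."""
    dense = np.zeros(m["shape"])
    dense[m["rows"], m["cols"]] = m["vals"]
    return dense
```

In this sketch the transpose touches only the small metadata arrays, never the (potentially large) dense matrix, which is why it avoids the decompress-transpose-recompress round trip.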

    EFFICIENT MATRIX DATA FORMAT APPLICABLE FOR ARTIFICIAL NEURAL NETWORK

    Publication No.: US20210034332A1

    Publication Date: 2021-02-04

    Application No.: US17073512

    Filing Date: 2020-10-19

    Abstract: Many computing systems process data organized in a matrix format. For example, artificial neural networks (ANNs) perform numerous computations on data organized into matrices using conventional matrix arithmetic operations. One commonly performed operation is the transpose. Additionally, many such systems must process numerous matrices and/or matrices that are large in size. For sparse matrices, which hold few significant values and many values that can be ignored, transmitting and processing all the values is wasteful. Thus, techniques are introduced for storing a sparse matrix in a compressed format that allows a matrix transpose operation to be performed on the compressed matrix without first decompressing it. By utilizing the introduced techniques, more matrix operations can be performed than in conventional systems.

    EFFICIENT MATRIX DATA FORMAT APPLICABLE FOR ARTIFICIAL NEURAL NETWORK

    Publication No.: US20200272425A1

    Publication Date: 2020-08-27

    Application No.: US16287564

    Filing Date: 2019-02-27

    Abstract: Many computing systems process data organized in a matrix format. For example, artificial neural networks (ANNs) perform numerous computations on data organized into matrices using conventional matrix arithmetic operations. One commonly performed operation is the transpose. Additionally, many such systems must process numerous matrices and/or matrices that are large in size. For sparse matrices, which hold few significant values and many values that can be ignored, transmitting and processing all the values is wasteful. Thus, techniques are introduced for storing a sparse matrix in a compressed format that allows a matrix transpose operation to be performed on the compressed matrix without first decompressing it. By utilizing the introduced techniques, more matrix operations can be performed than in conventional systems.

    SPARSE MATRIX MULTIPLICATION IN A NEURAL NETWORK

    Publication No.: US20250045107A1

    Publication Date: 2025-02-06

    Application No.: US18240281

    Filing Date: 2023-08-30

    Abstract: Apparatuses, systems, and methods to enable matrix multiplication acceleration by modifying an input to apply sparsity through sparse activation filtering. In at least one embodiment, a neural network modifies pixels within an image through sparse activation filtering to enable use of one or more matrix multiplication acceleration units to perform a sparse patch embedding operation.
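The abstract describes modifying an input so it satisfies the sparsity pattern a matrix multiplication acceleration unit expects. A minimal sketch of such sparse activation filtering, assuming the accelerator requires at most 2 nonzero activations per group of 4 (the keep/group numbers and the function name are illustrative assumptions, not from the patent):

```python
import numpy as np

def sparse_activation_filter(x, keep=2, group=4):
    """Zero out all but the `keep` largest-magnitude values in every
    `group` consecutive activations, so the subsequent patch-embedding
    matrix multiply conforms to a structured sparsity pattern that a
    hardware acceleration unit can exploit. Sketch only; the patent's
    actual filtering criterion is not given in the abstract."""
    flat = x.reshape(-1, group).copy()
    # rank activations by magnitude within each group
    order = np.argsort(np.abs(flat), axis=1)
    # zero the (group - keep) smallest activations per group
    for drop in order[:, : group - keep].T:
        flat[np.arange(flat.shape[0]), drop] = 0.0
    return flat.reshape(x.shape)
```

Filtering the activations (rather than the weights) trades a small amount of input fidelity for the ability to run the embedding through a sparse matrix multiply unit.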

    Compression techniques for data structures suitable for artificial neural networks

    Publication No.: US11489541B2

    Publication Date: 2022-11-01

    Application No.: US16426303

    Filing Date: 2019-05-30

    Abstract: In artificial neural networks and other similar applications, there is typically a large amount of data involved that is considered sparse. Due to the large size of the data in such applications, compressing it saves bandwidth when transmitting the data and memory when storing it. Introduced herein is a compression technique that selects elements with significant values from the data and restructures them into a structured sparse format. By generating metadata that enforces the structured sparse format and organizing the data according to that metadata, the introduced technique not only reduces the size of the data but also consistently places the data in a particular format. As such, hardware can be simplified and optimized to process the data much faster and more efficiently than conventional compression techniques that rely on a non-structured sparsity format.

    Efficient matrix format suitable for neural networks

    Publication No.: US11127167B2

    Publication Date: 2021-09-21

    Application No.: US16397034

    Filing Date: 2019-04-29

    Abstract: Many computing systems process data organized in a matrix format. For example, artificial neural networks perform numerous computations on data organized into matrices using conventional matrix arithmetic operations. One such operation is the transpose. Techniques are introduced for storing a matrix in a compressed format that allows, for example, a transpose operation to be performed during decompression. Thus, by utilizing the introduced techniques, transformations of compressed matrices, such as transposition, can be achieved more effectively. Parallel processing may also be used to compress and/or decompress more efficiently.
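Unlike the format above that transposes the compressed representation itself, this abstract describes performing the transpose during decompression. A minimal sketch, assuming a coordinate-style compressed layout (index arrays plus values); the function name and layout are illustrative assumptions, not from the patent:

```python
import numpy as np

def decompress_transposed(shape, rows, cols, vals):
    """Decompress a coordinate-compressed matrix directly into its
    transpose: each stored (row, col, value) triple is scattered to
    position (col, row) of the output, so the transpose adds no cost
    beyond the decompression pass itself. Each scatter is independent,
    so the writes could also be parallelized, as the abstract suggests.
    """
    out = np.zeros((shape[1], shape[0]))
    out[cols, rows] = vals  # swap indices while scattering
    return out
```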

    Efficient matrix data format applicable for artificial neural network

    Publication No.: US10860293B2

    Publication Date: 2020-12-08

    Application No.: US16287564

    Filing Date: 2019-02-27

    Abstract: Many computing systems process data organized in a matrix format. For example, artificial neural networks (ANNs) perform numerous computations on data organized into matrices using conventional matrix arithmetic operations. One commonly performed operation is the transpose. Additionally, many such systems must process numerous matrices and/or matrices that are large in size. For sparse matrices, which hold few significant values and many values that can be ignored, transmitting and processing all the values is wasteful. Thus, techniques are introduced for storing a sparse matrix in a compressed format that allows a matrix transpose operation to be performed on the compressed matrix without first decompressing it. By utilizing the introduced techniques, more matrix operations can be performed than in conventional systems.
