Generating compressed data streams with lookback pre-fetch instructions for pre-fetching decompressed data from a lookback buffer

    Publication number: US10120581B2

    Publication date: 2018-11-06

    Application number: US15085399

    Filing date: 2016-03-30

    Abstract: Aspects for generating compressed data streams with lookback pre-fetch instructions are disclosed. A data compression system is provided and configured to receive and compress an uncompressed data stream as part of a lookback-based compression scheme. The data compression system determines if a current data block was previously compressed. If so, the data compression system is configured to insert a lookback instruction corresponding to the current data block into the compressed data stream. Each lookback instruction includes a lookback buffer index that points to an entry in a lookback buffer where decompressed data corresponding to the data block will be stored during a separate decompression scheme. Once the data blocks have been compressed, the data compression system is configured to move a lookback buffer index of each lookback instruction in the compressed data stream into a lookback pre-fetch instruction located earlier than the corresponding lookback instruction in the compressed data stream.
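The two passes the abstract describes (emit a lookback instruction when a block repeats, then hoist each lookback buffer index into an earlier pre-fetch instruction) can be sketched roughly as follows. This is a minimal illustration of the flow, not the patented implementation; all names (`compress_stream`, the `LITERAL`/`LOOKBACK`/`PREFETCH` opcodes) and the fixed hoisting distance are assumptions for illustration only.

```python
# Illustrative sketch only: opcodes, names, and the fixed pre-fetch
# distance are assumptions, not details from the patent.
PREFETCH_DISTANCE = 2  # how far ahead of the lookback the pre-fetch lands


def compress_stream(blocks):
    """Compress a sequence of data blocks into an instruction stream."""
    seen = {}      # block value -> lookback buffer index
    stream = []
    for block in blocks:
        if block in seen:
            # Block was previously compressed: emit a lookback instruction
            # carrying the index of its entry in the lookback buffer.
            stream.append(("LOOKBACK", seen[block]))
        else:
            seen[block] = len(seen)
            stream.append(("LITERAL", block))

    # Second pass: move each lookback buffer index into a pre-fetch
    # instruction located earlier in the stream, so a decompressor can
    # fetch the decompressed data from the lookback buffer ahead of time.
    out = list(stream)
    inserted = 0
    for pos, (op, arg) in enumerate(stream):
        if op == "LOOKBACK":
            at = max(0, pos + inserted - PREFETCH_DISTANCE)
            out.insert(at, ("PREFETCH", arg))
            inserted += 1
    return out
```

In this toy version the pre-fetch simply precedes its lookback by a fixed number of instructions; the abstract itself does not specify how far ahead the pre-fetch instruction is placed.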

    2. System, apparatus, and method for decompressing data (invention grant, in force)

    Publication number: US09413386B1

    Publication date: 2016-08-09

    Application number: US14626905

    Filing date: 2015-02-19

    CPC classification number: H03M7/3088 H03M7/30 H03M7/3086 H03M7/6005

    Abstract: A system for data decompression may include a processor coupled to a remote memory having a remote dictionary stored thereon and coupled to a decompression logic having a local memory with a local dictionary. The processor may decompress data during execution by accessing the local dictionary, and if necessary, the remote dictionary.
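The local-first, remote-fallback lookup the abstract describes can be sketched as below. This is a hedged illustration of the two-level dictionary access pattern only; the function and variable names are invented for the example and do not come from the patent.

```python
# Illustrative sketch: decompression logic consults a small local
# dictionary first and falls back to the larger remote dictionary
# only on a miss. All names here are assumptions for illustration.
def decompress(tokens, local_dict, remote_dict):
    """Expand a token stream using local-first dictionary lookup."""
    output = []
    for token in tokens:
        if token in local_dict:
            output.append(local_dict[token])   # fast path: local memory
        else:
            output.append(remote_dict[token])  # slow path: remote memory
    return b"".join(output)
```

The point of the split is latency: the common case is served from the decompression logic's local memory, and only entries absent from the local dictionary incur a remote-memory access.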


    GENERATING COMPRESSED DATA STREAMS WITH LOOKBACK PRE-FETCH INSTRUCTIONS FOR PRE-FETCHING DECOMPRESSED DATA FROM A LOOKBACK BUFFER

    Publication number: US20170285939A1

    Publication date: 2017-10-05

    Application number: US15085399

    Filing date: 2016-03-30


    Neural processing unit (NPU) direct memory access (NDMA) memory bandwidth optimization

    Publication number: US11295205B2

    Publication date: 2022-04-05

    Application number: US16147245

    Filing date: 2018-09-28

    Abstract: A neural processing unit (NPU) is described. The NPU includes an NPU direct memory access (NDMA) core. The NDMA core includes a read engine having a read buffer. The NDMA core also includes a write engine having a write buffer. The NPU also includes a controller. The controller is configured to direct the NDMA core to perform hardware memory bandwidth optimization for reading/writing NDMA data in the read buffer and/or NDMA data in the write buffer. The NDMA core is also configured to transparently combine NDMA transaction requests for a data stripe to increase local access to available tensors in artificial neural networks.
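The bandwidth optimization the abstract mentions, transparently combining NDMA transaction requests for a data stripe, amounts to coalescing adjacent or overlapping memory requests into fewer, larger transactions. A minimal sketch of that idea, with invented names and a simple `(offset, length)` request model that is not from the patent:

```python
# Illustrative sketch: merge adjacent/overlapping (offset, length)
# requests against one data stripe into fewer, larger transactions.
# The request model and names are assumptions for illustration.
def combine_requests(requests):
    """Coalesce contiguous or overlapping (offset, length) requests."""
    merged = []
    for offset, length in sorted(requests):
        if merged and offset <= merged[-1][0] + merged[-1][1]:
            # Contiguous or overlapping with the previous request:
            # extend the previous transaction instead of issuing a new one.
            prev_off, prev_len = merged[-1]
            merged[-1] = (prev_off, max(prev_len, offset + length - prev_off))
        else:
            merged.append((offset, length))
    return merged
```

Fewer, larger transactions amortize per-request overhead on the memory interface, which is the bandwidth benefit the abstract attributes to combining requests.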
