PROCESSOR INSTRUCTIONS FOR DATA COMPRESSION AND DECOMPRESSION

    Publication number: US20220197642A1

    Publication date: 2022-06-23

    Application number: US17133328

    Application date: 2020-12-23

    Abstract: A processor is provided that includes compression instructions to compress multiple adjacent data blocks of uncompressed read-only data stored in memory into one compressed read-only data block, and to store the compressed block in multiple adjacent blocks in the memory. During execution of an application that operates on the read-only data, one of the adjacent blocks storing the compressed read-only data is read from memory, decompressed in the memory controller, and stored in a prefetch buffer. In response to a subsequent request by the application for an adjacent data block within the compressed read-only data block, the uncompressed adjacent block is read directly from the prefetch buffer.
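The buffering behavior the abstract describes can be sketched in software. This is a minimal illustration, not the patented hardware: `zlib` stands in for the processor's compression instructions, and the class and attribute names (`MemoryController`, `prefetch`, `buffer_hits`) are hypothetical.

```python
import zlib

BLOCK_SIZE = 64  # bytes per data block (illustrative size)

class MemoryController:
    """Sketch: adjacent read-only blocks are compressed into one region;
    the first read decompresses the whole region into a prefetch buffer,
    so requests for adjacent blocks are served from the buffer."""

    def __init__(self, blocks):
        # "Compression instruction": pack adjacent blocks into one
        # compressed region stored in memory.
        self.compressed = zlib.compress(b"".join(blocks))
        self.prefetch = None      # decompressed region, once resident
        self.buffer_hits = 0

    def read_block(self, index):
        if self.prefetch is None:
            # Miss: fetch the compressed region and decompress it in
            # the controller, filling the prefetch buffer.
            self.prefetch = zlib.decompress(self.compressed)
        else:
            self.buffer_hits += 1  # adjacent block served from buffer
        start = index * BLOCK_SIZE
        return self.prefetch[start:start + BLOCK_SIZE]

blocks = [bytes([i]) * BLOCK_SIZE for i in range(4)]
mc = MemoryController(blocks)
assert mc.read_block(0) == blocks[0]  # first read decompresses the region
assert mc.read_block(1) == blocks[1]  # adjacent read hits the buffer
assert mc.buffer_hits == 1
```

The point of the sketch is the second read: it never touches the compressed data again, which is the latency saving the abstract claims for adjacent-block accesses.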

    TWO-LEVEL MAIN MEMORY HIERARCHY MANAGEMENT

    Publication number: US20210216452A1

    Publication date: 2021-07-15

    Application number: US17214818

    Application date: 2021-03-27

    Abstract: A two-level main memory is provided in which both volatile memory and persistent memory are exposed to the operating system in a flat manner, and data movement and management are performed at cache-line granularity. The operating system can allocate pages across the first-level and second-level main memory either randomly, in a memory-type-agnostic manner, or more intelligently, by allocating predicted hot pages in first-level main memory and predicted cold pages in second-level main memory. Cache-line-granularity movement is performed as a "swap": a hot cache line in the second-level main memory is exchanged with a cold cache line in the first-level main memory, because data is stored in either the first-level or the second-level main memory, never in both.
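The swap invariant — each cache line lives in exactly one level — can be modeled with two dictionaries. A minimal sketch, assuming hypothetical names (`TwoLevelMemory`, `promote`); the real mechanism operates on hardware cache lines, not Python objects.

```python
class TwoLevelMemory:
    """Sketch of cache-line-granularity management: data resides in
    exactly one level, so promoting a hot line is a swap, not a copy."""

    def __init__(self, level1, level2):
        self.level1 = dict(level1)  # fast volatile memory (e.g. DRAM)
        self.level2 = dict(level2)  # slower persistent memory

    def promote(self, hot_addr, cold_addr):
        # Swap a hot line in level 2 with a cold line in level 1,
        # preserving the invariant that no line exists in both levels.
        hot = self.level2.pop(hot_addr)
        cold = self.level1.pop(cold_addr)
        self.level1[hot_addr] = hot
        self.level2[cold_addr] = cold

mem = TwoLevelMemory(level1={0x10: b"cold-line"},
                     level2={0x20: b"hot-line"})
mem.promote(hot_addr=0x20, cold_addr=0x10)
assert mem.level1 == {0x20: b"hot-line"}
assert mem.level2 == {0x10: b"cold-line"}
```

The swap (rather than a copy-and-invalidate) is what lets the total capacity of the two levels be the sum of both, since no line is ever duplicated.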

    PROVIDING DEAD-BLOCK PREDICTION FOR DETERMINING WHETHER TO CACHE DATA IN CACHE DEVICES

    Publication number: US20190050332A1

    Publication date: 2019-02-14

    Application number: US15996392

    Application date: 2018-06-01

    Abstract: Provided are an apparatus and system to cache data in a first cache and a second cache that cache data from a shared memory in a local processor node, wherein the shared memory is accessible to at least one remote processor node. A cache controller writes a block to the second cache in response to determining that the block is more likely to be accessed by the local processor node than by a remote processor node. The first cache controller writes the block to the shared memory, without writing to the second cache, in response to determining that the block is more likely to be accessed by one of the at least one remote processor node than by the local processor node.
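The controller's decision can be sketched as a single branch on a dead-block predictor. This is an illustrative sketch only: `handle_eviction`, the `Block` record, and the even/odd-address predictor are all hypothetical stand-ins for the hardware predictor the abstract describes.

```python
from collections import namedtuple

Block = namedtuple("Block", ["addr", "data"])

def handle_eviction(block, predictor, second_cache, shared_memory):
    """On eviction from the first cache, consult a dead-block style
    predictor to decide where the block is written."""
    if predictor(block):
        # Predicted more likely to be accessed by the local node:
        # keep it in the local second-level cache.
        second_cache[block.addr] = block.data
    else:
        # Predicted more likely to be accessed by a remote node:
        # write straight to shared memory, bypassing the second cache.
        shared_memory[block.addr] = block.data

second_cache, shared_memory = {}, {}
# Hypothetical predictor: treat even addresses as locally reused.
predictor = lambda b: b.addr % 2 == 0
handle_eviction(Block(0x40, b"local"), predictor, second_cache, shared_memory)
handle_eviction(Block(0x41, b"remote"), predictor, second_cache, shared_memory)
```

Bypassing the second cache for remotely-bound blocks avoids polluting local cache capacity with lines that are effectively dead to the local node.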

    SELECTIVE DATA COMPRESSION/DECOMPRESSION FOR INTERMEMORY TRANSFER INTERFACE

    Publication number: US20180095674A1

    Publication date: 2018-04-05

    Application number: US15283124

    Application date: 2016-09-30

    CPC classification number: G06F3/0608 G06F3/064 G06F3/0679

    Abstract: In one embodiment, an inter-memory transfer interface having selective data compression/decompression in accordance with the present description selects, from multiple candidate processes, a compression/decompression process to compress a region of data from a near memory before transmitting the compressed data to the far memory. In another aspect, the inter-memory transfer interface stores metadata indicating the particular compression/decompression process selected to compress that region of data. The stored metadata may then be used to identify the compression/decompression technique selected for a particular region, for purposes of locating the compressed data and subsequently decompressing data of that region when it is read from the far memory. Other aspects are described herein.
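The select-then-record pattern can be sketched as follows. The candidate table and function names (`CODECS`, `write_region`, `read_region`) are hypothetical, `zlib` at two levels stands in for whatever candidate processes the interface offers, and smallest-output is just one plausible selection criterion.

```python
import zlib

# Hypothetical candidate compression/decompression processes.
CODECS = {
    "zlib-fast": (lambda d: zlib.compress(d, 1), zlib.decompress),
    "zlib-best": (lambda d: zlib.compress(d, 9), zlib.decompress),
    "raw":       (lambda d: d, lambda d: d),   # store uncompressed
}

def write_region(data, far_memory, metadata, region_id):
    # Select the candidate yielding the smallest output for this region,
    # then record the choice in per-region metadata.
    name, (comp, _) = min(CODECS.items(),
                          key=lambda kv: len(kv[1][0](data)))
    far_memory[region_id] = comp(data)
    metadata[region_id] = name

def read_region(far_memory, metadata, region_id):
    # The metadata identifies which process compressed this region,
    # so the matching decompressor can be applied on the read path.
    _, decomp = CODECS[metadata[region_id]]
    return decomp(far_memory[region_id])

far_memory, metadata = {}, {}
payload = b"abc" * 100
write_region(payload, far_memory, metadata, region_id=0)
assert read_region(far_memory, metadata, 0) == payload
```

Recording the choice per region is the key step: without the metadata, the read path could not know which of the candidate processes to invert.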
