METHOD FOR CACHING AND MIGRATING DE-COMPRESSED PAGE

    Publication Number: US20240119001A1

    Publication Date: 2024-04-11

    Application Number: US18377597

    Application Date: 2023-10-06

    Applicant: Rambus Inc.

    CPC classification number: G06F12/0802

    Abstract: Disclosed are techniques for storing data decompressed from the compressed pages of a memory block when servicing data access requests from a host device of a memory system to compressed page data, where the memory block has been compressed into multiple compressed pages. A cache buffer may store the decompressed data for a few compressed pages to save decompression memory space. The memory system may track the number of accesses to the decompressed data in the cache and the number of compressed pages that have been decompressed into the cache, and from these calculate a metric associated with the frequency of access to the compressed pages within the memory block. If the metric does not exceed a threshold, additional compressed pages are decompressed into the cache. Otherwise, all of the compressed pages within the memory block are decompressed into a separately allocated memory space to reduce data access latency.
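
    A minimal Python sketch of the access-frequency policy this abstract describes, under assumed names: decompress_page(), ACCESS_THRESHOLD, and the metric (accesses per decompressed page) are illustrative choices, not the patented implementation.

```python
import zlib

ACCESS_THRESHOLD = 4.0  # assumed threshold for the access-frequency metric


def decompress_page(compressed_page: bytes) -> bytes:
    """Stand-in for the real page decompressor."""
    return zlib.decompress(compressed_page)


class DecompressionCache:
    def __init__(self, block_pages: dict[int, bytes]):
        self.block_pages = block_pages      # page index -> compressed bytes
        self.cache: dict[int, bytes] = {}   # page index -> decompressed bytes
        self.accesses = 0                   # accesses served from decompressed data
        self.pages_decompressed = 0         # compressed pages expanded into the cache
        self.migrated: dict[int, bytes] | None = None

    def read(self, page_idx: int) -> bytes:
        if self.migrated is not None:       # whole block already migrated
            return self.migrated[page_idx]
        if page_idx not in self.cache:      # decompress additional pages on demand
            self.cache[page_idx] = decompress_page(self.block_pages[page_idx])
            self.pages_decompressed += 1
        self.accesses += 1
        # Metric: average accesses per page decompressed into the cache.
        metric = self.accesses / self.pages_decompressed
        if metric > ACCESS_THRESHOLD:
            # Frequently accessed block: decompress every page into separately
            # allocated memory to reduce future access latency.
            self.migrated = {i: decompress_page(p)
                             for i, p in self.block_pages.items()}
            self.cache.clear()
            return self.migrated[page_idx]
        return self.cache[page_idx]
```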

    FLEXIBLE METADATA ALLOCATION AND CACHING

    Publication Number: US20240013819A1

    Publication Date: 2024-01-11

    Application Number: US18348716

    Application Date: 2023-07-07

    Applicant: Rambus Inc.

    CPC classification number: G11C7/1084 G11C7/1006

    Abstract: An apparatus and method for flexible metadata allocation and caching. In one embodiment of the method, first and second requests are received from first and second applications, respectively, where the requests specify reads of first and second data, respectively, from one or more memory devices. The circuit reads the first and second data in response to receiving the first and second requests, and receives first and second metadata from the one or more memory devices in response to those requests. The first and second metadata correspond to the first and second data, respectively. The first and second data are equal in size, while the first and second metadata are unequal in size.
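
    A short Python sketch of the idea that equally sized data reads can carry differently sized metadata per application; the per-application metadata sizes and class names below are assumptions for illustration, not the claimed circuit.

```python
from dataclasses import dataclass


@dataclass
class ReadResponse:
    data: bytes      # fixed-size data block (same size for every application)
    metadata: bytes  # per-application metadata; sizes may differ


class MetadataAwareMemory:
    DATA_SIZE = 64   # both requests read the same amount of data (bytes)

    def __init__(self, metadata_sizes: dict[str, int]):
        # Hypothetical per-application metadata allocation, e.g. {"app1": 4, "app2": 16}.
        self.metadata_sizes = metadata_sizes
        self.data_store: dict[int, bytes] = {}
        self.meta_store: dict[int, bytes] = {}

    def write(self, addr: int, data: bytes, metadata: bytes) -> None:
        assert len(data) == self.DATA_SIZE
        self.data_store[addr] = data
        self.meta_store[addr] = metadata

    def read(self, app: str, addr: int) -> ReadResponse:
        # Data size is identical for all applications; metadata size is not.
        meta = self.meta_store[addr][: self.metadata_sizes[app]]
        return ReadResponse(data=self.data_store[addr], metadata=meta)
```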

    MEMORY MODULE WITH MEMORY-OWNERSHIP EXCHANGE

    Publication Number: US20250103508A1

    Publication Date: 2025-03-27

    Application Number: US18812262

    Application Date: 2024-08-22

    Applicant: Rambus Inc.

    Abstract: Described are computational systems in which hosts share pooled memory on the same memory module. A memory buffer with access to the pooled memory manages which regions of the memory are allocated to the different hosts such that memory regions, and thus the data they contain, can be exchanged between hosts. Unidirectional or bidirectional data exchanges between hosts swap regions of equal size so the amount of memory allocated to each host is not changed as a result of the exchange.
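
    A brief Python sketch of an ownership-exchange table for pooled memory; the region/host bookkeeping shown is an assumed model of the buffer's allocation state, not the actual hardware design.

```python
class PooledMemoryBuffer:
    def __init__(self):
        # region id -> (owning host, size in bytes)
        self.regions: dict[str, tuple[str, int]] = {}

    def allocate(self, region: str, host: str, size: int) -> None:
        self.regions[region] = (host, size)

    def exchange(self, region_a: str, region_b: str) -> None:
        """Swap ownership of two equally sized regions between their hosts,
        leaving each host's total allocation unchanged."""
        host_a, size_a = self.regions[region_a]
        host_b, size_b = self.regions[region_b]
        if size_a != size_b:
            raise ValueError("exchanged regions must be the same size")
        self.regions[region_a] = (host_b, size_a)
        self.regions[region_b] = (host_a, size_b)


# Example: host0 and host1 swap 1 GiB regions; per-host totals are unchanged.
pool = PooledMemoryBuffer()
pool.allocate("r0", "host0", 1 << 30)
pool.allocate("r1", "host1", 1 << 30)
pool.exchange("r0", "r1")
```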

    NON-DISRUPTIVE MEMORY MIGRATION

    Publication Number: US20240184476A1

    Publication Date: 2024-06-06

    Application Number: US18519345

    Application Date: 2023-11-27

    Applicant: Rambus Inc.

    CPC classification number: G06F3/065 G06F3/0611 G06F3/064 G06F3/0683

    Abstract: A memory pool controller accesses multiple tiers of memory. Characteristics that sort memory into tiers may include, for example, slow/fast/fastest, longer-latency/shorter-latency, local/remote, compressed/uncompressed, bandwidth, jitter, capacity, and persistence, or a combination thereof. The controller may select and migrate blocks of data (e.g., pages) from one tier of memory to another. The controller uses a pointer during block migrations so that applications can access migrating blocks without stopping the running workload. The controller also monitors the access frequency of blocks so that less frequently accessed blocks may be selected for migration to lower-performance tiers of memory and more frequently accessed blocks may be migrated to higher-performance tiers.
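
    A compact Python sketch of pointer-based, non-disruptive migration between tiers; the tier names, redirect table, and hot_threshold parameter are illustrative assumptions standing in for the hardware controller.

```python
class TieredMemoryController:
    def __init__(self, tiers: list[str]):
        self.tiers = tiers                    # ordered fastest -> slowest, e.g. ["fast", "slow"]
        self.location: dict[int, str] = {}    # block id -> tier currently holding the block
        self.redirect: dict[int, str] = {}    # block id -> destination tier while migrating
        self.access_count: dict[int, int] = {}

    def add_block(self, block: int, tier: str) -> None:
        self.location[block] = tier
        self.access_count[block] = 0

    def access(self, block: int) -> str:
        """Return the tier that services this access; the workload never stalls."""
        self.access_count[block] += 1
        # While a block is migrating, the redirect pointer steers accesses to
        # the destination copy instead of pausing the application.
        return self.redirect.get(block, self.location[block])

    def migrate(self, block: int, dest_tier: str) -> None:
        self.redirect[block] = dest_tier      # install the pointer before copying
        # ... data copy from the old tier to dest_tier would happen here ...
        self.location[block] = dest_tier      # commit the new location
        del self.redirect[block]              # drop the pointer once migration completes

    def rebalance(self, hot_threshold: int) -> None:
        """Demote rarely accessed blocks; promote frequently accessed ones."""
        for block, count in self.access_count.items():
            target = self.tiers[0] if count >= hot_threshold else self.tiers[-1]
            if self.location[block] != target:
                self.migrate(block, target)
```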

    MEMORY DEVICE FLUSH BUFFER OPERATIONS

    Publication Number: US20240311301A1

    Publication Date: 2024-09-19

    Application Number: US18598142

    Application Date: 2024-03-07

    Applicant: Rambus Inc.

    CPC classification number: G06F12/0804

    Abstract: A dynamic random access memory (DRAM) device includes functions configured to aid in operating the DRAM device as part of data caching operations. In response to some write and/or read access commands, the DRAM device is configured to copy a cache line (e.g., a dirty cache line) from the main DRAM memory array, place it in a flush buffer, and replace the copied cache line in the main DRAM memory array with a new (e.g., different) cache line of data. In response to conditions and/or events (e.g., an explicit command, refresh, a write-to-read command sequence, unused data bus bandwidth, a full flush buffer, etc.), the DRAM device transmits the cache line from the flush buffer to the controller. The controller may then transmit the cache line to other cache levels.
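
    A small Python model of the flush-buffer behavior described above; the buffer depth and event names are assumptions, and the real mechanism is implemented in the DRAM device rather than software.

```python
FLUSH_BUFFER_DEPTH = 4  # assumed flush-buffer capacity (cache lines)


class DramWithFlushBuffer:
    def __init__(self):
        self.array: dict[int, bytes] = {}                # main DRAM array: address -> cache line
        self.flush_buffer: list[tuple[int, bytes]] = []  # evicted (dirty) lines awaiting flush

    def replace_line(self, addr: int, new_line: bytes) -> None:
        """On certain write/read commands: move the old (dirty) line to the
        flush buffer and install the new line in the main array."""
        old = self.array.get(addr)
        if old is not None:
            self.flush_buffer.append((addr, old))
        self.array[addr] = new_line

    def maybe_flush(self, event: str) -> list[tuple[int, bytes]]:
        """On a triggering condition (explicit command, refresh, write-to-read
        turnaround, idle data bus) or a full buffer, hand buffered lines back
        to the controller, which may forward them to other cache levels."""
        if (event in ("explicit_command", "refresh", "write_to_read", "idle_bus")
                or len(self.flush_buffer) >= FLUSH_BUFFER_DEPTH):
            flushed, self.flush_buffer = self.flush_buffer, []
            return flushed
        return []
```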

    Interconnect based address mapping for improved reliability

    Publication Number: US12086060B2

    Publication Date: 2024-09-10

    Application Number: US17893790

    Application Date: 2022-08-23

    Applicant: Rambus Inc.

    CPC classification number: G06F12/06 G11C11/4087 G06F2212/1032

    Abstract: Row addresses received by a module are mapped before reaching the memory devices of the module, such that row hammer affects different neighboring row addresses in each memory device. Because the mapped row addresses applied to each device ensure that the set of neighboring rows for a given row address received by the module is different for each memory device on the module, row hammering of a given externally addressed row spreads the row hammer errors across different externally addressed rows on each memory device. This has the effect of confining the row hammer errors for each hammered row to a single memory device per externally addressed neighboring row. By confining the row hammer errors to a single memory device, the row hammer errors are correctable using an SDDC (single device data correction) scheme.
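
    An illustrative Python sketch of per-device row-address mapping; the XOR scrambling function and constants are hypothetical and simply demonstrate that the physical neighbors of a hammered external row differ from device to device.

```python
def map_row(external_row: int, device_index: int, num_rows: int = 1 << 16) -> int:
    """Hypothetical per-device mapping: scramble the row address with a
    device-specific constant so each device sees different physical neighbors
    for the same externally received row address."""
    scramble = (device_index * 0x5A5A + 0x0F0F) & (num_rows - 1)
    return external_row ^ scramble


# A hammered external row lands on a different internal row in each device,
# so its physical neighbors (and any bit flips they suffer) map back to
# different external rows per device, keeping the errors correctable by a
# single-device data correction (SDDC) scheme.
hammered = 0x1234
for dev in range(4):
    internal = map_row(hammered, dev)
    print(f"device {dev}: internal row {internal:#06x}, "
          f"physical neighbors {internal - 1:#06x} / {internal + 1:#06x}")
```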

    MEMORY SYSTEM FOR SECURE READ AND WRITE OPERATIONS BASED ON PREDEFINED DATA PATTERNS

    Publication Number: US20240160388A1

    Publication Date: 2024-05-16

    Application Number: US18497860

    Application Date: 2023-10-30

    Applicant: Rambus Inc.

    CPC classification number: G06F3/0689 G06F3/0623 G06F3/0656 G06F3/0659

    Abstract: A memory buffer device facilitates secure read and write operations associated with data that includes a predefined data pattern. For read operations, the memory buffer device detects a read data pattern in the read data that matches a predefined data pattern. The memory buffer device may then generate a read response that includes metadata identifying the read data pattern without sending the read data itself. The memory buffer device may also receive Write Request without Data (RwoD) commands from the host that include metadata identifying a write data pattern. The memory buffer device identifies the associated data pattern and writes the data pattern or the metadata to the memory array. The memory buffer device may include encryption and decryption logic for communicating the metadata in encrypted form.
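
    A minimal Python sketch of pattern-aware reads and Write-without-Data commands; the pattern table, response format, and method names are assumptions, and the metadata encryption mentioned in the abstract is omitted.

```python
PREDEFINED_PATTERNS = {
    0: b"\x00" * 64,  # all-zero cache line
    1: b"\xff" * 64,  # all-ones cache line
}
PATTERN_IDS = {pattern: pid for pid, pattern in PREDEFINED_PATTERNS.items()}


class PatternAwareBuffer:
    def __init__(self):
        self.memory: dict[int, bytes] = {}

    def read(self, addr: int) -> dict:
        # Unwritten locations are treated as the all-zero pattern in this sketch.
        data = self.memory.get(addr, PREDEFINED_PATTERNS[0])
        if data in PATTERN_IDS:
            # Matched a predefined pattern: respond with metadata only,
            # without sending the read data itself.
            return {"pattern_id": PATTERN_IDS[data]}
        return {"data": data}

    def write_without_data(self, addr: int, pattern_id: int) -> None:
        """RwoD-style command: the host sends only metadata naming the pattern."""
        self.memory[addr] = PREDEFINED_PATTERNS[pattern_id]
```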

    DRAM Cache with Stacked, Heterogenous Tag and Data Dies

    Publication Number: US20240086325A1

    Publication Date: 2024-03-14

    Application Number: US18242344

    Application Date: 2023-09-05

    Applicant: Rambus Inc.

    CPC classification number: G06F12/0815 G06F12/123 G06F2212/305

    Abstract: A high-capacity cache memory is implemented by multiple heterogeneous DRAM dies, including a dedicated tag-storage DRAM die architected for low-latency tag-address retrieval and thus rapid hit/miss determination, and one or more capacity-optimized cache-line DRAM dies that yield a net cache-line storage capacity orders of magnitude beyond that of state-of-the-art SRAM cache implementations. The tag-storage die serves double duty in some implementations, yielding rapid tag hit/miss determination for cache-line read/write requests while also serving as a high-capacity snoop filter in a memory-sharing multiprocessor environment.
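
    A short Python sketch of a cache split across a fast tag store and a high-capacity line store; the direct-mapped organization and set count are assumptions that abstract away the stacked-die details.

```python
class StackedDramCache:
    NUM_SETS = 1 << 20                        # hypothetical number of cache sets

    def __init__(self, backing: dict[int, bytes]):
        self.backing = backing                # stand-in for main memory
        self.tag_die: dict[int, int] = {}     # set index -> stored tag (low-latency lookup)
        self.data_die: dict[int, bytes] = {}  # set index -> cached line (high capacity)

    def read(self, addr: int) -> bytes:
        set_idx = addr % self.NUM_SETS
        tag = addr // self.NUM_SETS
        # Hit/miss is decided from the tag store alone, before touching the
        # larger, slower cache-line store.
        if self.tag_die.get(set_idx) == tag:
            return self.data_die[set_idx]     # hit
        line = self.backing[addr]             # miss: fill from backing memory
        self.tag_die[set_idx] = tag
        self.data_die[set_idx] = line
        return line
```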
