REDUCED LATENCY METADATA ENCRYPTION AND DECRYPTION

    Publication No.: US20250047469A1

    Publication Date: 2025-02-06

    Application No.: US18669731

    Application Date: 2024-05-21

    Applicant: Rambus Inc.

    Abstract: Techniques for providing reduced latency metadata encryption and decryption are described herein. A memory buffer device has a cryptographic circuit that receives first data and first metadata associated with the first data. The cryptographic circuit can encrypt or decrypt the first metadata using a first cryptographic algorithm and can encrypt or decrypt the first data using a second cryptographic algorithm. The first data and the first metadata can be stored at the same location within a memory device, corresponding to a memory address.
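
    The abstract's split-cipher idea is sketched below in C under loose assumptions: the two cipher bodies are placeholder XOR transforms (the abstract does not name the algorithms), and the struct, function, and key names are invented for illustration. The sketch only shows the shape of the flow, with metadata and data taking separate encrypt/decrypt paths while landing in one storage slot addressed together.

        #include <stdint.h>
        #include <string.h>
        #include <stdio.h>

        #define DATA_BYTES 64   /* one cache-line-sized unit of data */
        #define META_BYTES 8    /* metadata carried with that data   */

        /* Placeholder for the first (lower-latency) metadata cipher; a plain
         * XOR stream stands in for whatever algorithm the device uses. */
        static void meta_cipher(uint8_t *meta, size_t n, uint8_t key) {
            for (size_t i = 0; i < n; i++)
                meta[i] ^= key;
        }

        /* Placeholder for the second (typically heavier) data cipher. */
        static void data_cipher(uint8_t *data, size_t n, uint8_t key) {
            for (size_t i = 0; i < n; i++)
                data[i] ^= (uint8_t)(key + i);
        }

        /* One stored line as it would sit at a single memory address: data and
         * metadata live together but were transformed independently. */
        struct enc_line {
            uint8_t data[DATA_BYTES];
            uint8_t meta[META_BYTES];
        };

        static void write_line(struct enc_line *slot,
                               const uint8_t *data, const uint8_t *meta) {
            memcpy(slot->data, data, DATA_BYTES);
            memcpy(slot->meta, meta, META_BYTES);
            data_cipher(slot->data, DATA_BYTES, 0x5A); /* second algorithm */
            meta_cipher(slot->meta, META_BYTES, 0xA5); /* first algorithm  */
        }

        static void read_line(const struct enc_line *slot,
                              uint8_t *data, uint8_t *meta) {
            memcpy(data, slot->data, DATA_BYTES);
            memcpy(meta, slot->meta, META_BYTES);
            data_cipher(data, DATA_BYTES, 0x5A);  /* XOR stand-ins are symmetric */
            meta_cipher(meta, META_BYTES, 0xA5);
        }

        int main(void) {
            struct enc_line slot;   /* one memory-address worth of storage */
            uint8_t d[DATA_BYTES] = "payload", m[META_BYTES] = "tag";
            uint8_t d2[DATA_BYTES], m2[META_BYTES];
            write_line(&slot, d, m);
            read_line(&slot, d2, m2);
            printf("round trip ok: %d\n",
                   !memcmp(d, d2, DATA_BYTES) && !memcmp(m, m2, META_BYTES));
            return 0;
        }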

    MEMORY SYSTEM FOR FLEXIBLY ALLOCATING COMPRESSED STORAGE

    Publication No.: US20250110670A1

    Publication Date: 2025-04-03

    Application No.: US18887285

    Application Date: 2024-09-17

    Applicant: Rambus Inc.

    Abstract: A memory system enables a host device to flexibly allocate compressed storage managed by a memory buffer device. The host device allocates a first block of host-visible addresses associated with the compressed region, and the memory buffer device allocates a corresponding second block of host-visible memory. The host device may migrate uncompressed data to and from compressed storage by referencing an address in the second block (with compression and decompression managed by the memory buffer device), and may migrate compressed data to and from compressed storage (bypassing compression and decompression on the memory buffer device) by instead referencing an address in the first block.
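
    A minimal C sketch of the two-window dispatch follows, with hypothetical base addresses and no-op compression stubs; none of these constants or helper names come from the patent. The point is the routing: a write into the second block passes through the compression engine, while a write into the first block bypasses it.

        #include <stdint.h>
        #include <stdio.h>

        /* Hypothetical host-visible windows onto the same compressed region; the
         * real bases and sizes are negotiated by host and buffer device. */
        #define FIRST_BLOCK_BASE  0x100000000ULL  /* raw view: data moves as-is         */
        #define SECOND_BLOCK_BASE 0x200000000ULL  /* cooked view: device (de)compresses */
        #define BLOCK_SIZE        0x040000000ULL  /* 1 GiB per window                   */

        static int in_range(uint64_t addr, uint64_t base) {
            return addr >= base && addr < base + BLOCK_SIZE;
        }

        /* No-op stand-ins for the buffer device's compression engine and store. */
        static void compress_page(const void *src, void *dst) { (void)src; (void)dst; }
        static void copy_page(const void *src, void *dst)     { (void)src; (void)dst; }

        /* Host writes a page to a host-visible address; the buffer device picks
         * the path from the window the address falls in. */
        static void buffer_device_write(uint64_t host_addr, const void *page) {
            void *backing = NULL;   /* device-managed compressed storage (not modeled) */
            if (in_range(host_addr, SECOND_BLOCK_BASE)) {
                compress_page(page, backing);   /* uncompressed data: compress on the way in */
            } else if (in_range(host_addr, FIRST_BLOCK_BASE)) {
                copy_page(page, backing);       /* already compressed: bypass the engine */
            } else {
                fprintf(stderr, "address 0x%llx outside compressed region\n",
                        (unsigned long long)host_addr);
            }
        }

        int main(void) {
            char page[4096] = {0};
            buffer_device_write(SECOND_BLOCK_BASE + 0x1000, page); /* device compresses    */
            buffer_device_write(FIRST_BLOCK_BASE  + 0x1000, page); /* compression bypassed */
            return 0;
        }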

    Compression via deallocation
    Invention Grant

    Publication No.: US12204446B2

    Publication Date: 2025-01-21

    Application No.: US18140441

    Application Date: 2023-04-27

    Applicant: Rambus Inc.

    Abstract: A buffer/interface device of a memory node reads a block of data (e.g., a page). As each unit of data (e.g., cache-line sized) of the block is read, it is compared against one or more predefined patterns (e.g., all 0's, all 1's, etc.). If the block (page) is only storing one of the predefined patterns, a flag in the page table entry for the block is set to indicate that the block is only storing one of the predefined patterns. The physical memory the block was occupying may then be deallocated so that other data may be stored at those physical memory addresses.
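
    Below is a small C sketch of the scan-and-flag step under assumed parameters (4 KiB pages, 64-byte lines, patterns limited to all 0s and all 1s); the page-table-entry layout is illustrative rather than the patent's.

        #include <stdint.h>
        #include <stdbool.h>
        #include <stdio.h>

        #define PAGE_SIZE 4096
        #define LINE_SIZE   64
        #define LINES_PER_PAGE (PAGE_SIZE / LINE_SIZE)

        /* Illustrative page-table-entry layout: a flag plus a small pattern index,
         * standing in for whatever encoding the buffer/interface device uses. */
        struct pte {
            uint64_t phys_addr;
            bool     pattern_only;   /* page holds nothing but one predefined pattern */
            uint8_t  pattern_id;     /* which pattern (0 = all 0s, 1 = all 1s)        */
        };

        /* True if every byte of the cache-line-sized unit equals `byte`. */
        static bool line_is(const uint8_t *line, uint8_t byte) {
            for (int i = 0; i < LINE_SIZE; i++)
                if (line[i] != byte)
                    return false;
            return true;
        }

        /* Scan the page as it is read; if the whole page is one predefined pattern,
         * mark the PTE and tell the caller the physical frame can be reclaimed. */
        static bool try_compress_by_dealloc(const uint8_t *page, struct pte *entry) {
            static const uint8_t patterns[] = { 0x00, 0xFF };   /* all 0s, all 1s */
            for (unsigned p = 0; p < sizeof patterns; p++) {
                bool all_match = true;
                for (int l = 0; l < LINES_PER_PAGE && all_match; l++)
                    all_match = line_is(page + l * LINE_SIZE, patterns[p]);
                if (all_match) {
                    entry->pattern_only = true;
                    entry->pattern_id   = (uint8_t)p;
                    return true;   /* caller may now deallocate entry->phys_addr */
                }
            }
            return false;
        }

        int main(void) {
            static uint8_t page[PAGE_SIZE];             /* zero-filled page */
            struct pte entry = { .phys_addr = 0x1000 };
            if (try_compress_by_dealloc(page, &entry))
                printf("page holds pattern %u only; physical frame can be reclaimed\n",
                       entry.pattern_id);
            return 0;
        }

    Presumably the reclaimed frame returns to a free pool, and a later write to the page would allocate fresh physical memory and clear the flag; the sketch stops at the scan-and-flag step.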

    METHOD FOR CACHING AND MIGRATING DE-COMPRESSED PAGE

    Publication No.: US20240119001A1

    Publication Date: 2024-04-11

    Application No.: US18377597

    Application Date: 2023-10-06

    Applicant: Rambus Inc.

    CPC classification number: G06F12/0802

    Abstract: Disclosed are techniques for storing data decompressed from the compressed pages of a memory block when servicing data access requests from a host device of a memory system to the compressed page data, in which the memory block has been compressed into multiple compressed pages. A cache buffer may store the decompressed data for a few compressed pages to save decompression memory space. The memory system may keep track of the number of accesses to the decompressed data in the cache and the number of compressed pages that have been decompressed into the cache, to calculate a metric associated with the frequency of access to the compressed pages within the memory block. If the metric does not exceed a threshold, additional compressed pages are decompressed into the cache. Otherwise, all the compressed pages within the memory block are decompressed into a separately allocated memory space to reduce data access latency.
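
    An illustrative C sketch of the miss-time decision follows, using an assumed metric (cache hits per decompressed page) and an arbitrary threshold; the abstract does not pin down the exact metric, so treat the arithmetic as one plausible choice.

        #include <stdio.h>

        /* Per-block bookkeeping; field names are illustrative, not the patent's. */
        struct block_stats {
            unsigned cache_hits;         /* accesses served from the decompressed cache */
            unsigned pages_decompressed; /* compressed pages expanded into the cache    */
            unsigned pages_total;        /* compressed pages in the memory block        */
        };

        /* Assumed hotness metric: average accesses per decompressed page. */
        static double access_metric(const struct block_stats *s) {
            return s->pages_decompressed
                 ? (double)s->cache_hits / (double)s->pages_decompressed
                 : 0.0;
        }

        /* On a miss, either keep filling the small cache one page at a time or
         * promote the whole block into its own uncompressed allocation. */
        static void on_cache_miss(struct block_stats *s, double threshold) {
            if (access_metric(s) <= threshold) {
                s->pages_decompressed++;   /* cold-ish block: decompress just this page */
                printf("decompress one page into cache (%u/%u)\n",
                       s->pages_decompressed, s->pages_total);
            } else {
                printf("migrate: decompress all %u pages into allocated memory\n",
                       s->pages_total);    /* hot block: later accesses skip decompression */
                s->pages_decompressed = s->pages_total;
            }
        }

        int main(void) {
            struct block_stats s = { .cache_hits = 12, .pages_decompressed = 2,
                                     .pages_total = 16 };
            on_cache_miss(&s, 4.0);   /* metric 12/2 = 6 exceeds 4, so the block migrates */
            return 0;
        }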

    A FAR MEMORY ALLOCATOR FOR DATA CENTER STRANDED MEMORY

    Publication No.: US20230376412A1

    Publication Date: 2023-11-23

    Application No.: US18030971

    Application Date: 2021-10-11

    Applicant: RAMBUS INC.

    CPC classification number: G06F12/0292 G06F12/023 G06F2212/154

    Abstract: An integrated circuit device includes a first memory to support address translation between local addresses and fabric addresses, and a processing circuit operatively coupled to the first memory. The processing circuit, acting as a donor, dynamically allocates a portion of first local memory of a local server as first far memory for access by a first remote server, or, acting as a requester, receives an allocation of second far memory from the first remote server or a second remote server for access by the local server. The processing circuit bridges the access by the first remote server to the allocated portion of first local memory as the first far memory, through the fabric addresses and the address translation supported by the first memory, or bridges the access by the local server to the second far memory, through the address translation supported by the first memory and the fabric addresses.
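
    A rough C sketch of the donor-side bookkeeping, assuming a flat table of fabric-to-local windows; the entry fields, sizes, and addresses are invented for illustration, and the requester side would hold the mirror-image mapping.

        #include <stdint.h>
        #include <stdio.h>

        #define MAX_XLATE 16

        /* One entry of the translation memory: a fabric-addressed window backed by
         * donated local memory on this node. Field names are illustrative. */
        struct xlate_entry {
            uint64_t fabric_base;
            uint64_t local_base;
            uint64_t size;
        };

        static struct xlate_entry xlate[MAX_XLATE];
        static int xlate_count;

        /* Donor role: expose a slice of local memory at a fabric address so a
         * remote server can reach it as far memory. */
        static void donate(uint64_t local_base, uint64_t size, uint64_t fabric_base) {
            if (xlate_count < MAX_XLATE)
                xlate[xlate_count++] =
                    (struct xlate_entry){ fabric_base, local_base, size };
        }

        /* Bridge an incoming fabric access to the donated local memory; returns
         * the translated local address, or 0 if the fabric address is unmapped. */
        static uint64_t fabric_to_local(uint64_t fabric_addr) {
            for (int i = 0; i < xlate_count; i++) {
                const struct xlate_entry *e = &xlate[i];
                if (fabric_addr >= e->fabric_base &&
                    fabric_addr <  e->fabric_base + e->size)
                    return e->local_base + (fabric_addr - e->fabric_base);
            }
            return 0;
        }

        int main(void) {
            /* Donate 256 MiB of stranded local memory at fabric address 0x800000000. */
            donate(0x40000000ULL, 0x10000000ULL, 0x800000000ULL);
            printf("fabric 0x800000040 -> local 0x%llx\n",
                   (unsigned long long)fabric_to_local(0x800000040ULL));
            return 0;
        }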

    REDUNDANT DATA LOG RETRIEVAL IN MULTI-PROCESSOR DEVICE

    Publication No.: US20230161599A1

    Publication Date: 2023-05-25

    Application No.: US17968488

    Application Date: 2022-10-18

    Applicant: Rambus Inc.

    CPC classification number: G06F9/4406 H04L67/141

    Abstract: A device includes interface circuitry to receive requests from at least one host system, a primary processor coupled to the interface circuitry, and a secure processor coupled to the primary processor. In response to a failure of the primary processor, the secure processor is to: verify a log retrieval command received via the interface circuitry, wherein the log retrieval command is cryptographically signed; in response to the verification, retrieve crash dump data stored in memory that is accessible by the primary processor; generate a log file that comprises the retrieved crash dump data; and cause the log file to be transmitted to the at least one host system over a sideband link that is coupled externally to the interface circuitry.
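
    A hedged C sketch of the secure-processor flow follows; signature checking, the crash-dump memory, and the sideband link are reduced to stubs, since the abstract does not name the signature scheme or the transport.

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>
        #include <string.h>

        /* Stub: a real device would verify the command's signature against a
         * provisioned key; accepting a fixed trailing tag is illustrative only. */
        static bool verify_signature(const unsigned char *cmd, size_t len) {
            static const unsigned char tag[4] = { 'S', 'I', 'G', '1' };
            return len >= sizeof tag &&
                   memcmp(cmd + len - sizeof tag, tag, sizeof tag) == 0;
        }

        /* Stand-ins for memory shared with the failed primary processor and for
         * the sideband link that sits outside the normal host interface. */
        static size_t read_crash_dump(unsigned char *dst, size_t cap) {
            const char *dump = "primary-processor crash dump";
            size_t n = strlen(dump);
            if (n > cap) n = cap;
            memcpy(dst, dump, n);
            return n;
        }

        static void sideband_send(const unsigned char *buf, size_t len) {
            printf("sideband tx: %zu bytes\n", len);
            (void)buf;
        }

        /* Secure-processor path taken after a primary-processor failure. */
        static void handle_log_retrieval(const unsigned char *cmd, size_t cmd_len) {
            if (!verify_signature(cmd, cmd_len)) {
                printf("log retrieval rejected: bad signature\n");
                return;
            }
            unsigned char log_file[256];
            size_t n = read_crash_dump(log_file, sizeof log_file);
            sideband_send(log_file, n);   /* deliver the log out of band */
        }

        int main(void) {
            const unsigned char cmd[] = "GET_LOG SIG1";   /* hypothetical signed command */
            handle_log_retrieval(cmd, sizeof cmd - 1);
            return 0;
        }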

    MULTI-PROCESSOR DEVICE WITH EXTERNAL INTERFACE FAILOVER

    Publication No.: US20250110917A1

    Publication Date: 2025-04-03

    Application No.: US18919053

    Application Date: 2024-10-17

    Applicant: Rambus Inc.

    Abstract: A multi-processor device is disclosed. The multi-processor device includes interface circuitry to receive requests from at least one host device. A primary processor is coupled to the interface circuitry to process the requests in the absence of a failure event associated with the primary processor. A secondary processor processes operations on behalf of the primary processor and selectively receives the requests from the interface circuitry based on detection of the failure event associated with the primary processor.
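
    A compact C sketch of the steering decision, with a boolean flag standing in for whatever failure-detection mechanism the device uses; the request contents and handler names are illustrative.

        #include <stdbool.h>
        #include <stdio.h>

        /* Illustrative request; a real device would carry memory or I/O
         * transactions rather than a string. */
        struct request { const char *payload; };

        static void primary_handle(const struct request *r) {
            printf("primary: %s\n", r->payload);
        }

        static void secondary_handle(const struct request *r) {
            printf("secondary (failover): %s\n", r->payload);
        }

        /* Interface-circuitry model: steer each request based on whether a
         * failure event has been detected on the primary processor. */
        struct interface {
            bool primary_failed;
        };

        static void dispatch(struct interface *iface, const struct request *r) {
            if (!iface->primary_failed)
                primary_handle(r);      /* normal path */
            else
                secondary_handle(r);    /* failover path */
        }

        int main(void) {
            struct interface iface = { .primary_failed = false };
            struct request r1 = { "read 0x1000" }, r2 = { "read 0x2000" };
            dispatch(&iface, &r1);
            iface.primary_failed = true;    /* failure event detected */
            dispatch(&iface, &r2);
            return 0;
        }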
