-
Publication Number: US12147351B2
Publication Date: 2024-11-19
Application Number: US18139220
Application Date: 2023-04-25
Applicant: Rambus Inc.
Inventor: Evan Lawrence Erickson , Christopher Haywood , Mark D. Kellam
IPC: G06F12/10 , G06F12/0804 , G06F12/0882 , G06F12/1009 , G06F12/123 , G06F13/16
Abstract: Memory pages are background-relocated from a low-latency local operating memory of a server computer to a higher-latency memory installation that enables high-resolution access monitoring and thus access-demand differentiation among the relocated memory pages. Higher access-demand memory pages are background-restored to the low-latency operating memory, while lower access-demand pages are maintained in the higher-latency memory installation, and yet-lower access-demand pages are optionally moved to a yet higher-latency memory installation.
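The abstract above describes a background tiering flow: pages demoted to the monitored higher-latency tier accumulate access statistics, hot pages are promoted back to local memory, and cold pages may be pushed further out. A minimal behavioral sketch of that promotion/demotion decision in C, assuming hypothetical per-page access counters, tier names, and thresholds that the abstract does not specify:

```c
/* Hypothetical sketch of access-demand-based page promotion/demotion.
 * Tier 0 = low-latency local memory, tier 1 = monitored higher-latency
 * memory, tier 2 = an even higher-latency installation. The thresholds
 * and counter semantics are illustrative assumptions. */
#include <stddef.h>
#include <stdio.h>

enum tier { LOCAL = 0, MONITORED = 1, FAR = 2 };

struct page {
    unsigned id;
    enum tier tier;
    unsigned access_count;   /* high-resolution counter in the monitored tier */
};

#define PROMOTE_THRESHOLD 64 /* accesses per epoch that justify promotion  */
#define DEMOTE_THRESHOLD   4 /* accesses per epoch below which we demote   */

static void background_rebalance(struct page *pages, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        struct page *p = &pages[i];
        if (p->tier == MONITORED && p->access_count >= PROMOTE_THRESHOLD)
            p->tier = LOCAL;     /* restore hot page to low-latency memory */
        else if (p->tier == MONITORED && p->access_count <= DEMOTE_THRESHOLD)
            p->tier = FAR;       /* optionally push cold page further out  */
        p->access_count = 0;     /* start a new monitoring epoch           */
    }
}

int main(void)
{
    struct page pages[] = {
        { .id = 0, .tier = MONITORED, .access_count = 100 },
        { .id = 1, .tier = MONITORED, .access_count = 2   },
        { .id = 2, .tier = MONITORED, .access_count = 20  },
    };
    background_rebalance(pages, 3);
    for (int i = 0; i < 3; i++)
        printf("page %u -> tier %d\n", pages[i].id, pages[i].tier);
    return 0;
}
```

In a real system such a rebalance pass would run periodically in the background, with thresholds tuned to the latency gap between tiers.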
-
Publication Number: US20240345735A1
Publication Date: 2024-10-17
Application Number: US18681716
Application Date: 2022-08-08
Applicant: Rambus Inc.
Inventor: Brent Steven Haukness , Christopher Haywood , Torsten Partsch , Thomas Vogelsang
IPC: G06F3/06
CPC classification number: G06F3/0611 , G06F3/0659 , G06F3/0673
Abstract: Memory devices, modules, controllers, systems and associated methods are disclosed. In one embodiment, a dynamic random access memory (DRAM) device is disclosed. The DRAM device includes memory core circuitry including an array of DRAM storage cells organized into bank groups. Each bank group includes multiple banks, where each of the multiple banks includes addressable columns of DRAM storage cells. The DRAM device includes signal interface circuitry having dedicated write data path circuitry and dedicated read data path circuitry. Selector circuitry, for a first memory transaction, selectively couples at least one of the addressable columns of DRAM storage cells to the dedicated read data path circuitry or the dedicated write data path circuitry.
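Since the abstract describes dedicated read and write data paths with selector circuitry steering an addressed column to one of them per transaction, a small behavioral model can make the routing concrete. This is only an illustrative sketch; the bank-group/bank/column dimensions and the function names are assumptions, not taken from the patent:

```c
/* Behavioral sketch of a selector coupling an addressed column to either a
 * dedicated read data path or a dedicated write data path for a given
 * memory transaction. */
#include <stdint.h>
#include <stdio.h>

#define BANK_GROUPS      4
#define BANKS_PER_GROUP  4
#define COLUMNS          16

typedef uint64_t column_t;

static column_t core[BANK_GROUPS][BANKS_PER_GROUP][COLUMNS]; /* DRAM core model */

enum path { READ_PATH, WRITE_PATH };

/* Selector: couples the addressed column to exactly one data path. */
static void select_column(enum path p, int bg, int bank, int col,
                          column_t *read_path, const column_t *write_path)
{
    if (p == WRITE_PATH)
        core[bg][bank][col] = *write_path;   /* dedicated write data path */
    else
        *read_path = core[bg][bank][col];    /* dedicated read data path  */
}

int main(void)
{
    column_t wr = 0xA5A5A5A5A5A5A5A5ull, rd = 0;
    select_column(WRITE_PATH, 1, 2, 3, NULL, &wr);
    select_column(READ_PATH,  1, 2, 3, &rd, NULL);
    printf("read back 0x%llx\n", (unsigned long long)rd);
    return 0;
}
```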
-
Publication Number: US12050787B2
Publication Date: 2024-07-30
Application Number: US17650643
Application Date: 2022-02-10
Applicant: Rambus, Inc.
Inventor: Shih-ho Wu , Christopher Haywood
CPC classification number: G06F3/0625 , G06F3/0659 , G06F3/0679 , G11C14/0063 , G11C16/0408
Abstract: The present invention is directed to computer storage systems and methods thereof. In an embodiment, a memory system comprises a controller module, a nonvolatile memory, and a volatile memory. The controller module operates according to a command and operation table. The command and operation table can be updated to change the way the controller module operates. When the command and operation table is updated, the updated table is stored at a predefined location of the nonvolatile memory. There are other embodiments as well.
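To make the table-driven control flow concrete, the sketch below shows an update to a command and operation table being persisted at a predefined nonvolatile-memory location, as the abstract describes. The table layout, the fixed offset, and the nvm_write() helper are illustrative assumptions:

```c
/* Sketch of updating a command/operation table and persisting it at a
 * predefined nonvolatile-memory location. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define NVM_SIZE         4096
#define TABLE_NVM_OFFSET 512     /* predefined location in nonvolatile memory */
#define TABLE_ENTRIES    16

struct op_entry {
    uint8_t opcode;              /* command received from the host           */
    uint8_t action;              /* controller operation to perform for it   */
};

struct op_table {
    uint32_t version;
    struct op_entry entries[TABLE_ENTRIES];
};

static uint8_t nvm[NVM_SIZE];    /* stand-in for the nonvolatile memory      */

static void nvm_write(size_t off, const void *src, size_t len)
{
    memcpy(&nvm[off], src, len); /* real hardware would program flash here   */
}

/* Apply an update, then persist the new table so it survives power loss. */
static void update_table(struct op_table *t, uint8_t opcode, uint8_t action)
{
    t->entries[opcode % TABLE_ENTRIES] = (struct op_entry){ opcode, action };
    t->version++;
    nvm_write(TABLE_NVM_OFFSET, t, sizeof *t);
}

int main(void)
{
    struct op_table table = { .version = 1 };
    update_table(&table, 0x20, 0x07);
    printf("table v%u persisted at NVM offset %d\n",
           (unsigned)table.version, TABLE_NVM_OFFSET);
    return 0;
}
```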
-
Publication Number: US20240212743A1
Publication Date: 2024-06-27
Application Number: US18555714
Application Date: 2022-04-14
Applicant: Rambus, Inc.
Inventor: Christopher Haywood
IPC: G11C11/4093 , G11C11/4076 , G11C11/4096 , G11C29/42
CPC classification number: G11C11/4093 , G11C11/4076 , G11C11/4096 , G11C29/42
Abstract: Technologies for concurrent interface operations of integrated circuit memory devices are described. An integrated circuit memory device includes an input port, a control port, and an output port. The input port receives interleaved input and a first timing reference. The interleaved input includes one or more commands or write data. The control port receives one or more control signals that specify whether the interleaved input is the one or more commands or the write data. The output port transmits read data and a second timing reference. The commands or write data can be received concurrently with transmission of the read data.
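The port arrangement above can be illustrated with a small decoder: control-port signals identify whether the current input-port beat carries a command or write data, while read data is driven on the separate output port. The two-bit control encoding and structure names below are illustrative assumptions:

```c
/* Sketch of decoding an interleaved input port whose contents (command or
 * write data) are identified by control-port signals, while a separate
 * output port carries read data. */
#include <stdint.h>
#include <stdio.h>

enum ctrl { CTRL_IDLE = 0, CTRL_COMMAND = 1, CTRL_WRITE_DATA = 2 };

struct mem_device {
    uint32_t pending_command;
    uint64_t write_buffer;
    uint64_t read_data;          /* driven on the output port concurrently */
};

/* One input-port beat: control signals say what the interleaved input is. */
static void input_beat(struct mem_device *d, enum ctrl c, uint64_t input)
{
    switch (c) {
    case CTRL_COMMAND:    d->pending_command = (uint32_t)input; break;
    case CTRL_WRITE_DATA: d->write_buffer = input;              break;
    default:              break;  /* idle beat */
    }
}

int main(void)
{
    struct mem_device dev = { .read_data = 0xDEADBEEF };
    /* A command and write data arrive while read data is being transmitted. */
    input_beat(&dev, CTRL_COMMAND, 0x1234);
    input_beat(&dev, CTRL_WRITE_DATA, 0xCAFEF00Dull);
    printf("cmd=0x%x wr=0x%llx rd(out)=0x%llx\n",
           (unsigned)dev.pending_command,
           (unsigned long long)dev.write_buffer,
           (unsigned long long)dev.read_data);
    return 0;
}
```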
-
Publication Number: US20240119001A1
Publication Date: 2024-04-11
Application Number: US18377597
Application Date: 2023-10-06
Applicant: Rambus Inc.
Inventor: Taeksang Song , Christopher Haywood , Evan Lawrence Erickson
IPC: G06F12/0802
CPC classification number: G06F12/0802
Abstract: Disclosed are techniques for storing data decompressed from the compressed pages of a memory block when servicing a data access request from a host device of a memory system to the compressed page data, where the memory block has been compressed into multiple compressed pages. A cache buffer may store the decompressed data for a few compressed pages to save decompression memory space. The memory system may keep track of the number of accesses to the decompressed data in the cache and the number of compressed pages that have been decompressed into the cache to calculate a metric associated with the frequency of access to the compressed pages within the memory block. If the metric does not exceed a threshold, additional compressed pages are decompressed into the cache. Otherwise, all the compressed pages within the memory block are decompressed into a separately allocated memory space to reduce data access latency.
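The decision logic in the abstract hinges on a frequency-of-access metric computed from cache hits and the number of pages decompressed so far. A minimal sketch of that decision, assuming an accesses-per-decompressed-page metric and an arbitrary threshold (the abstract does not define either precisely):

```c
/* Sketch of the decision between per-page decompression into a small cache
 * and full-block decompression into a separately allocated region. The
 * metric (accesses per decompressed page) and threshold are illustrative. */
#include <stdbool.h>
#include <stdio.h>

struct block_state {
    unsigned cache_accesses;     /* hits to decompressed data in the cache */
    unsigned pages_decompressed; /* compressed pages decompressed so far   */
};

#define HOT_BLOCK_THRESHOLD 8.0  /* accesses per decompressed page         */

/* Returns true when the whole block should be decompressed up front. */
static bool should_decompress_full_block(const struct block_state *s)
{
    if (s->pages_decompressed == 0)
        return false;
    double metric = (double)s->cache_accesses / s->pages_decompressed;
    return metric > HOT_BLOCK_THRESHOLD;
}

int main(void)
{
    struct block_state cold = { .cache_accesses = 6,  .pages_decompressed = 3 };
    struct block_state hot  = { .cache_accesses = 40, .pages_decompressed = 3 };
    printf("cold block -> full decompress? %d\n", should_decompress_full_block(&cold));
    printf("hot block  -> full decompress? %d\n", should_decompress_full_block(&hot));
    return 0;
}
```

The trade-off this captures is between decompression memory footprint (cache only) and access latency (fully decompressed block), which is the balance the abstract describes.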
-
Publication Number: US11854658B2
Publication Date: 2023-12-26
Application Number: US17696818
Application Date: 2022-03-16
Applicant: Rambus Inc.
Inventor: Christopher Haywood , David Wang
CPC classification number: G11C7/1072 , G06F11/073 , G06F11/0778 , G06F11/0787 , G06F11/1044 , G06F11/1048 , G06F11/1068 , G11C7/1006 , G06F11/1008 , G11C5/04 , G11C29/52 , G11C2029/0411
Abstract: A method for operating a DRAM device. The method includes receiving, in a memory buffer of a first memory module hosted by a computing system, a request from a host controller of the computing system for data stored in RAM of the first memory module. The method includes receiving, with the memory buffer, the data associated with the RAM in response to the request, and formatting, with the memory buffer, the data into scrambled data using a pseudo-random process. The method includes initiating, with the memory buffer, transfer of the scrambled data to an interface device.
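A simple way to picture the scrambling step is an XOR of the read data with a pseudo-random keystream seeded per transfer. The xorshift generator and seeding below are illustrative assumptions; the abstract only states that a pseudo-random process is used:

```c
/* Sketch of scrambling read data with a pseudo-random keystream before it
 * is transferred to an interface device. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint64_t xorshift64(uint64_t *state)
{
    uint64_t x = *state;
    x ^= x << 13;
    x ^= x >> 7;
    x ^= x << 17;
    return *state = x;
}

/* XOR each 64-bit word of the data with the keystream (scrambling). */
static void scramble(uint64_t *data, size_t words, uint64_t seed)
{
    uint64_t state = seed;
    for (size_t i = 0; i < words; i++)
        data[i] ^= xorshift64(&state);
}

int main(void)
{
    uint64_t cacheline[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };  /* data read from RAM */
    scramble(cacheline, 8, 0x1234ABCDull);               /* done in the buffer */
    printf("first scrambled word: 0x%llx\n", (unsigned long long)cacheline[0]);
    return 0;
}
```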
-
Publication Number: US20230376412A1
Publication Date: 2023-11-23
Application Number: US18030971
Application Date: 2021-10-11
Applicant: RAMBUS INC.
Inventor: Evan Lawrence Erickson , Christopher Haywood
IPC: G06F12/02
CPC classification number: G06F12/0292 , G06F12/023 , G06F2212/154
Abstract: An integrated circuit device includes a first memory to support address translation between local addresses and fabric addresses, and a processing circuit operatively coupled to the first memory. The processing circuit, acting on a dynamic basis as a donor, allocates a portion of first local memory of a local server as first far memory for access by a first remote server, or, acting as a requester, receives an allocation of second far memory from the first remote server or a second remote server for access by the local server. The processing circuit bridges the access by the first remote server to the allocated portion of first local memory as the first far memory, through the fabric addresses and the address translation supported by the first memory, or bridges the access by the local server to the second far memory, through the address translation supported by the first memory and the fabric addresses.
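The address-translation role of the first memory can be sketched as a range-based table that maps fabric addresses exposed to a remote requester back to the donor's local addresses. The table format and window sizes below are illustrative assumptions:

```c
/* Sketch of range-based translation between fabric addresses and local
 * addresses for far-memory bridging. */
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct xlate_entry {
    uint64_t local_base;   /* donor's local physical address of the window */
    uint64_t fabric_base;  /* fabric address exposed to the requester      */
    uint64_t size;
};

/* Translate a fabric address from a remote requester into a local address. */
static bool fabric_to_local(const struct xlate_entry *tbl, size_t n,
                            uint64_t fabric_addr, uint64_t *local_addr)
{
    for (size_t i = 0; i < n; i++) {
        if (fabric_addr >= tbl[i].fabric_base &&
            fabric_addr <  tbl[i].fabric_base + tbl[i].size) {
            *local_addr = tbl[i].local_base + (fabric_addr - tbl[i].fabric_base);
            return true;
        }
    }
    return false;  /* not a donated window */
}

int main(void)
{
    /* Donor lends a 1 GiB window of local memory at fabric address 0x8000000000. */
    struct xlate_entry table[] = {
        { .local_base = 0x100000000ull, .fabric_base = 0x8000000000ull,
          .size = 1ull << 30 },
    };
    uint64_t local;
    if (fabric_to_local(table, 1, 0x8000001040ull, &local))
        printf("fabric 0x8000001040 -> local 0x%llx\n", (unsigned long long)local);
    return 0;
}
```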
-
Publication Number: US11335430B2
Publication Date: 2022-05-17
Application Number: US16823908
Application Date: 2020-03-19
Applicant: Rambus Inc.
Inventor: Christopher Haywood
Abstract: Many error correction schemes fail to correct double-bit errors, and a module must be replaced when these double-bit errors occur repeatedly at the same address in order to prevent data corruption. In an embodiment, the addresses for one of the memory devices exhibiting a single-bit error (but not the other device also exhibiting a single-bit error) are transformed before the internal memory arrays are accessed. This has the effect of moving one of the error-prone memory cells to a different external (to the module) address, such that only one error-prone bit is accessed by the previously double-bit-error-prone address. Thus, a double-bit error at the original address is remapped into two correctable single-bit errors at different addresses.
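The remapping idea can be shown with a toy address transform: applying a swizzle to only one of the two failing devices moves its weak cell to a different external address, so each external address now sees at most one bad bit. The XOR-based swizzle below is an illustrative assumption; the abstract does not specify the transform:

```c
/* Sketch of remapping a double-bit-error-prone address by transforming the
 * address used for only one of the two failing memory devices. */
#include <stdint.h>
#include <stdio.h>

#define ADDR_SWIZZLE 0x40u   /* flips one internal address bit */

/* Transform applied only to the device selected for remapping. */
static uint32_t device_internal_addr(uint32_t external_addr, int remap_device)
{
    return remap_device ? (external_addr ^ ADDR_SWIZZLE) : external_addr;
}

int main(void)
{
    uint32_t bad_addr = 0x1F3C;  /* external address with a weak cell in dev A and dev B */

    /* Before remapping: both devices see the same internal address, so the
     * two weak cells line up and produce an uncorrectable double-bit error. */
    printf("dev A: 0x%x  dev B: 0x%x\n",
           device_internal_addr(bad_addr, 0), device_internal_addr(bad_addr, 0));

    /* After remapping device B: its weak cell now appears at a different
     * external address, leaving one correctable single-bit error here and
     * another correctable single-bit error at bad_addr ^ ADDR_SWIZZLE. */
    printf("dev A: 0x%x  dev B: 0x%x\n",
           device_internal_addr(bad_addr, 0), device_internal_addr(bad_addr, 1));
    return 0;
}
```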
-
Publication Number: US20200250090A1
Publication Date: 2020-08-06
Application Number: US16652234
Application Date: 2018-10-03
Applicant: Rambus Inc.
Inventor: Frederick A. Ware , John Eric Linstadt , Christopher Haywood
IPC: G06F12/0804 , G06F12/12
Abstract: A hybrid volatile/non-volatile memory module employs a relatively fast, durable, and expensive dynamic random-access memory (DRAM) cache to store a subset of data from a larger amount of relatively slow and inexpensive nonvolatile memory (NVM). A module controller prioritizes accesses to the DRAM cache for improved speed performance and to minimize programming cycles to the NVM. Data is first written to the DRAM cache, where it can be accessed (written to and read from) without the aid of the NVM. Data is only written to the NVM when that data is evicted from the DRAM cache to make room for additional data. Mapping tables relating NVM addresses to physical addresses are distributed throughout the DRAM cache using cache line bits that are not used for data.
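The write path described above, where writes land in the DRAM cache and the NVM is programmed only on eviction, can be sketched with a direct-mapped write-back cache. The cache geometry, metadata fields, and nvm_program() helper are illustrative assumptions:

```c
/* Sketch of a write-back DRAM cache in front of NVM: writes land in DRAM
 * first and NVM is programmed only when a dirty line is evicted. */
#include <stdint.h>
#include <stdbool.h>
#include <string.h>
#include <stdio.h>

#define CACHE_LINES 256
#define LINE_BYTES   64

struct cache_line {
    bool     valid, dirty;
    uint64_t nvm_addr;                /* mapping info kept in spare line bits */
    uint8_t  data[LINE_BYTES];
};

static struct cache_line dram_cache[CACHE_LINES];

static void nvm_program(uint64_t nvm_addr, const uint8_t *data)
{
    (void)nvm_addr; (void)data;       /* real hardware programs NVM here */
}

/* Write goes to the DRAM cache; NVM is touched only if a dirty line is evicted. */
static void cache_write(uint64_t nvm_addr, const uint8_t *src)
{
    struct cache_line *line = &dram_cache[(nvm_addr / LINE_BYTES) % CACHE_LINES];
    if (line->valid && line->dirty && line->nvm_addr != nvm_addr)
        nvm_program(line->nvm_addr, line->data);   /* evict: write back to NVM */
    line->valid = true;
    line->dirty = true;
    line->nvm_addr = nvm_addr;
    memcpy(line->data, src, LINE_BYTES);
}

int main(void)
{
    uint8_t buf[LINE_BYTES] = { 0xAB };
    cache_write(0x0000, buf);                             /* absorbed by DRAM  */
    cache_write(0x0000 + CACHE_LINES * LINE_BYTES, buf);  /* forces eviction   */
    printf("second write evicted the first line to NVM\n");
    return 0;
}
```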
-
Publication Number: US10607669B2
Publication Date: 2020-03-31
Application Number: US15978344
Application Date: 2018-05-14
Applicant: Rambus Inc.
Inventor: Christopher Haywood , David Wang
Abstract: A method for operating a DRAM device. The method includes receiving, in a memory buffer of a first memory module hosted by a computing system, a request from a host controller of the computing system for data stored in RAM of the first memory module. The method includes receiving, with the memory buffer, the data associated with the RAM in response to the request, and formatting, with the memory buffer, the data into scrambled data using a pseudo-random process. The method includes initiating, with the memory buffer, transfer of the scrambled data to an interface device.
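Because an XOR-keystream scrambler of the kind sketched for the related publication above is its own inverse when the seed is shared, the receiving interface device can descramble by repeating the same operation. A short, self-contained round-trip check of that property, again using an illustrative xorshift keystream:

```c
/* Round-trip sketch: XOR-keystream scrambling is self-inverse when the
 * interface device shares the seed, so applying the same operation twice
 * recovers the original data. */
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint64_t next_key(uint64_t *s)
{
    *s ^= *s << 13; *s ^= *s >> 7; *s ^= *s << 17;
    return *s;
}

static void xor_keystream(uint64_t *data, size_t words, uint64_t seed)
{
    for (size_t i = 0; i < words; i++)
        data[i] ^= next_key(&seed);
}

int main(void)
{
    uint64_t data[4] = { 10, 20, 30, 40 }, original[4] = { 10, 20, 30, 40 };
    xor_keystream(data, 4, 0x5EEDULL);   /* memory buffer scrambles       */
    xor_keystream(data, 4, 0x5EEDULL);   /* interface device descrambles  */
    for (int i = 0; i < 4; i++)
        assert(data[i] == original[i]);  /* data recovered exactly        */
    printf("round trip ok\n");
    return 0;
}
```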
-