Unsuccessful write retry buffer
    31.
    Invention Grant

    Publication No.: US11409672B2

    Publication Date: 2022-08-09

    Application No.: US16872681

    Filing Date: 2020-05-12

    Applicant: Rambus Inc.

    Abstract: A memory module includes at least two memory devices. Each memory device performs a verify operation after an attempted write to its memory core. When a write is unsuccessful, the memory device stores information about the unsuccessful write in an internal write retry buffer. A write operation may have been unsuccessful for only one memory device on the module and not for the others. When the memory module is instructed, both memory devices on the memory module can retry their unsuccessful memory write operations concurrently, even though those operations were to different addresses.
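
    The write-verify-buffer-retry flow described in the abstract can be sketched as follows. This is a rough behavioral model, not the patented implementation; all names (MemoryDevice, retry_buffer, fail_once) are illustrative, and the deliberately failed first write simulates a verify failure:

```python
# Hypothetical sketch of per-device write retry buffers on a memory module.
# Each device buffers its own failed writes; the module triggers all devices
# to replay their buffers concurrently, at possibly different addresses.

class MemoryDevice:
    def __init__(self):
        self.core = {}            # address -> stored value (the memory core)
        self.retry_buffer = []    # (address, data) pairs for failed writes
        self.fail_once = set()    # addresses whose first write fails (demo only)

    def write_and_verify(self, addr, data):
        if addr in self.fail_once:            # simulate a verify failure
            self.fail_once.discard(addr)
            self.retry_buffer.append((addr, data))
            return False
        self.core[addr] = data                # write succeeded and verified
        return True

    def retry_writes(self):
        pending, self.retry_buffer = self.retry_buffer, []
        for addr, data in pending:            # replay only this device's failures
            self.write_and_verify(addr, data)

class MemoryModule:
    def __init__(self, n_devices=2):
        self.devices = [MemoryDevice() for _ in range(n_devices)]

    def retry_all(self):
        # Each device replays its own buffered addresses; the addresses
        # need not match across devices.
        for dev in self.devices:
            dev.retry_writes()
```

    The point of the per-device buffer is that a single retry command from the module suffices even when different devices failed at different addresses.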

    Cache Memory That Supports Tagless Addressing

    Publication No.: US20190102318A1

    Publication Date: 2019-04-04

    Application No.: US16149553

    Filing Date: 2018-10-02

    Applicant: Rambus Inc.

    Abstract: The disclosed embodiments relate to a computer system with a cache memory that supports tagless addressing. During operation, the system receives a request to perform a memory access, wherein the request includes a virtual address. In response to the request, the system performs an address-translation operation, which translates the virtual address into both a physical address and a cache address. Next, the system uses the physical address to access one or more levels of physically addressed cache memory, wherein accessing a given level of physically addressed cache memory involves performing a tag-checking operation based on the physical address. If the access to the one or more levels of physically addressed cache memory fails to hit on a cache line for the memory access, the system uses the cache address to directly index a cache memory, wherein directly indexing the cache memory does not involve performing a tag-checking operation and eliminates the tag storage overhead.
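
    The two-path lookup described in the abstract can be sketched as below. This is a simplified model under assumed names (translate, tagged_levels, direct_cache), not the disclosed hardware: one translation yields both addresses, the tag-checked levels are probed first, and a miss falls through to a direct, tag-free index:

```python
# Illustrative sketch of tagless addressing: the address translation returns
# both a physical address and a cache address. Physically addressed levels
# are tag-checked; the final cache is directly indexed with no tag check.

def translate(va, page_table):
    # Assumed structure: one lookup yields (physical address, cache address).
    return page_table[va]

def access(va, page_table, tagged_levels, direct_cache):
    phys_addr, cache_addr = translate(va, page_table)
    for level in tagged_levels:              # e.g. L1, L2
        line = level.get(phys_addr)          # dict lookup stands in for a
        if line is not None:                 # tag comparison on phys_addr
            return line
    # Miss in all tag-checked levels: directly index by the cache address.
    # No tag comparison is performed, so no tag storage is needed here.
    return direct_cache[cache_addr]
```

    Because the translation step already pins down where the line lives in the directly indexed cache, that cache can omit tag arrays entirely, which is the storage saving the abstract refers to.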

    Methods and apparatuses for addressing memory caches

    Publication No.: US10102140B2

    Publication Date: 2018-10-16

    Application No.: US15393232

    Filing Date: 2016-12-28

    Applicant: Rambus Inc.

    Abstract: A cache memory includes cache lines to store information. The stored information is associated with physical addresses that include first, second, and third distinct portions. The cache lines are indexed by the second portions of respective physical addresses associated with the stored information. The cache memory also includes one or more tables, each of which includes respective table entries that are indexed by the first portions of the respective physical addresses. The respective table entries in each of the one or more tables are to store indications of the second portions of respective physical addresses associated with the stored information.

    Virtualized cache memory
    34.
    Invention Grant (In Force)

    Publication No.: US09507731B1

    Publication Date: 2016-11-29

    Application No.: US14512254

    Filing Date: 2014-10-10

    Applicant: Rambus Inc.

    Abstract: A memory address and a virtual cache identifier are received in association with a request to retrieve data from a cache data array. Context information is selected based on the virtual cache identifier, the context information indicating a first region of a plurality of regions within the cache data array. A cache line address that includes a first number of bits of the memory address in accordance with a size of the first region is generated and, if the cache data array is determined to contain, in a location indicated by the cache line address, a cache line corresponding to the memory address, the cache line is retrieved from the location indicated by the cache line address.

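    The region selection described in the abstract can be sketched as below. This is a simplified model with assumed names (contexts, line_address) and illustrative region sizes: the virtual cache identifier selects context information that fixes a region of the shared data array, and the line address uses only as many memory-address bits as that region's size requires:

```python
# Illustrative sketch of a virtualized cache: a virtual cache id selects a
# region (base, size in lines) of one shared data array.

contexts = {
    # vc_id -> (region_base, region_size_lines); values are illustrative
    1: (0, 64),
    2: (64, 128),
}

data_array = {}   # cache-line address -> (line tag, data)

def line_address(mem_addr, vc_id):
    base, size = contexts[vc_id]          # context selected by the vc_id
    # The number of index bits follows from the region size (a power of 2).
    return base + ((mem_addr >> 6) & (size - 1))

def fill(mem_addr, vc_id, data):
    data_array[line_address(mem_addr, vc_id)] = (mem_addr >> 6, data)

def read(mem_addr, vc_id):
    entry = data_array.get(line_address(mem_addr, vc_id))
    if entry is not None and entry[0] == mem_addr >> 6:  # line matches address
        return entry[1]
    return None                                          # miss
```

    Two virtual caches can thus coexist in one data array, each indexed with a bit count matched to its own region size.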
    Remapping memory cells based on future endurance measurements
    35.
    Invention Grant (In Force)

    Publication No.: US09442838B2

    Publication Date: 2016-09-13

    Application No.: US14058081

    Filing Date: 2013-10-18

    Applicant: Rambus Inc.

    Abstract: A method of operating a memory device that includes groups of memory cells is presented. The groups include a first group of memory cells. Each one of the groups has a respective physical address and is initially associated with a respective logical address. The device also includes an additional group of memory cells that has a physical address but is not initially associated with a logical address. In the method, a difference in the future endurance between the first group of memory cells and the additional group of memory cells is identified. When the difference in the future endurance between the first group and the additional group exceeds a predetermined threshold difference, the association between the first group and the logical address initially associated with the first group is ended and the additional group is associated with the logical address that was initially associated with the first group.

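    The remapping policy described in the abstract can be sketched as follows. This is a rough model under assumed names (maybe_remap, THRESHOLD) and an invented endurance scale: when the spare group's projected future endurance exceeds the active group's by more than a threshold, the logical address moves to the spare group:

```python
# Hypothetical sketch of endurance-based remapping: a logical address is
# re-associated from a worn physical group to a spare group with more
# remaining (future) endurance.

THRESHOLD = 1000   # illustrative endurance-difference threshold

def maybe_remap(logical_map, endurance, logical_addr, spare_phys):
    """Remap logical_addr to spare_phys if the endurance gap is large enough.
    Returns the physical group that is the spare after the call."""
    active_phys = logical_map[logical_addr]
    diff = endurance[spare_phys] - endurance[active_phys]
    if diff > THRESHOLD:
        # End the old association; the spare takes over the logical address.
        logical_map[logical_addr] = spare_phys
        return active_phys   # the old group becomes the new spare
    return spare_phys        # below threshold: nothing changes
```

    Keeping the old group as the new spare means the pool of physical groups stays constant while associations rotate toward the cells with the most life left.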
    MEMORY MODULE THREADING WITH STAGGERED DATA TRANSFERS
    36.
    Invention Application (Pending, Published)

    Publication No.: US20140047155A1

    Publication Date: 2014-02-13

    Application No.: US13963391

    Filing Date: 2013-08-09

    Applicant: Rambus Inc.

    Abstract: A method of transferring data between a memory controller and at least one memory module via a primary data bus having a primary data bus width is disclosed. The method includes accessing a first device of a memory device group via a corresponding data bus path in response to a threaded memory request from the memory controller. The access produces data groups that collectively form a first data thread transferred across a corresponding secondary data bus path. Transfer of the first data thread across the primary data bus width is carried out over a first time interval while using less than the primary data bus's continuous throughput during that interval. During the first time interval, at least one data group from a second data thread is transferred on the primary data bus.

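    The staggering described in the abstract can be sketched as below. This is a deliberately simplified timing model (the interleave function and slot granularity are assumptions): because each thread's groups use less than the bus's full continuous throughput, groups from a second thread can occupy the idle slots during the first thread's transfer interval:

```python
# Illustrative sketch of staggered module threading: data groups from two
# data threads share the primary bus by alternating bus slots, so neither
# thread alone saturates the bus during its transfer interval.

def interleave(thread_a, thread_b):
    """Merge two equal-length lists of data groups into one primary-bus
    schedule, alternating slots between the two threads."""
    schedule = []
    for a, b in zip(thread_a, thread_b):
        schedule.append(a)   # this slot carries a group from thread A
        schedule.append(b)   # the next slot carries a group from thread B
    return schedule
```

    Each thread's transfer is spread over a longer interval than a back-to-back burst would take, but the bus itself stays fully utilized by the interleaved traffic.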
    METHODS AND APPARATUSES FOR ADDRESSING MEMORY CACHES
    37.
    Invention Application (In Force)

    Publication No.: US20130332668A1

    Publication Date: 2013-12-12

    Application No.: US14001464

    Filing Date: 2012-02-22

    Applicant: Rambus Inc.

    Abstract: A cache memory includes cache lines to store information. The stored information is associated with physical addresses that include first, second, and third distinct portions. The cache lines are indexed by the second portions of respective physical addresses associated with the stored information. The cache memory also includes one or more tables, each of which includes respective table entries that are indexed by the first portions of the respective physical addresses. The respective table entries in each of the one or more tables are to store indications of the second portions of respective physical addresses associated with the stored information.

    Cache memory that supports tagless addressing

    Publication No.: US12124382B2

    Publication Date: 2024-10-22

    Application No.: US17992443

    Filing Date: 2022-11-22

    Applicant: Rambus Inc.

    CPC classification number: G06F12/1063 G06F12/0802 G06F12/1009 G06F12/1054

    Abstract: The disclosed embodiments relate to a computer system with a cache memory that supports tagless addressing. During operation, the system receives a request to perform a memory access, wherein the request includes a virtual address. In response to the request, the system performs an address-translation operation, which translates the virtual address into both a physical address and a cache address. Next, the system uses the physical address to access one or more levels of physically addressed cache memory, wherein accessing a given level of physically addressed cache memory involves performing a tag-checking operation based on the physical address. If the access to the one or more levels of physically addressed cache memory fails to hit on a cache line for the memory access, the system uses the cache address to directly index a cache memory, wherein directly indexing the cache memory does not involve performing a tag-checking operation and eliminates the tag storage overhead.
