STACKED DEVICE COMMUNICATION
    1.
    Invention Publication

    Publication Number: US20240241670A1

    Publication Date: 2024-07-18

    Application Number: US18427191

    Filing Date: 2024-01-30

    Applicant: Rambus Inc.

    Abstract: An interconnected stack of one or more Dynamic Random Access Memory (DRAM) die has a base logic die and one or more custom logic or processor die. The processor logic die snoops commands sent to and through the stack. In particular, the processor logic die may snoop mode setting commands (e.g., mode register set (MRS) commands). At least one mode setting command that is ignored by the DRAM die(s) in the stack is used to communicate a command to the processor logic die. In response, the processor logic die may prevent commands, addresses, and data from reaching the DRAM die(s). This enables the processor logic die to send commands/addresses and communicate data with the DRAM die(s). With this access, the processor logic die may execute software using the DRAM die(s) for program and/or data storage and retrieval.
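
    As a rough sketch of the snooping scheme described in the abstract, the following Python model assumes a hypothetical reserved mode register number and a simple takeover/release operand encoding; neither detail is specified by the abstract.

        # Hypothetical reserved mode register: real DRAM die in the stack ignore
        # MRS writes to this register, so the processor logic die can treat them
        # as private commands.
        RESERVED_MR = 63

        class ProcessorLogicDie:
            def __init__(self):
                # When True, commands/addresses/data are blocked from the DRAM die(s).
                self.isolated = False

            def snoop(self, command):
                """Watch commands flowing to and through the stack."""
                if command["type"] == "MRS" and command["register"] == RESERVED_MR:
                    self.handle_private_command(command["operand"])
                elif not self.isolated:
                    self.forward_to_dram(command)

            def handle_private_command(self, operand):
                # Illustrative encoding: 0x1 takes over the DRAM interface,
                # 0x0 hands it back to the host controller.
                if operand == 0x1:
                    self.isolated = True
                elif operand == 0x0:
                    self.isolated = False

            def forward_to_dram(self, command):
                pass  # pass-through path to the DRAM die(s)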

    ROW HAMMER MITIGATION
    2.
    Invention Publication

    Publication Number: US20240119989A1

    Publication Date: 2024-04-11

    Application Number: US18375810

    Filing Date: 2023-10-02

    Applicant: Rambus Inc.

    CPC classification number: G11C11/40618 G11C11/40615 G11C11/408

    Abstract: Row hammer is mitigated by issuing, to a memory device, mitigation operation (MOP) commands in order to cause the refresh of rows in a specified vicinity of a suspected aggressor row. Each mitigation operation command is associated with a row address that indicates the suspected aggressor row and an indicator of which neighbor row in its vicinity is to be refreshed. The mitigation operation commands are issued in response to a fixed number of activate commands. The suspected aggressor row is selected by randomly choosing, with equal probability, one of the N previous activate commands to supply its associated row address as the suspected aggressor row address. The neighbor row may be selected randomly with a probability that diminishes inversely with the distance between the suspected aggressor row and the neighbor row.
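
    To make the two random selections concrete, here is a minimal Python sketch. The history depth N, the maximum neighbor distance, and the exact 1/d weighting are illustrative assumptions; the abstract only states that the neighbor probability diminishes inversely with distance.

        import random

        N = 16            # assumed depth of the tracked activate-command history
        MAX_DISTANCE = 4  # assumed limit on how far a refreshed neighbor may be

        activate_history = []  # row addresses of the N most recent activate commands

        def on_activate(row_address):
            """Record each activate command in a fixed-depth history."""
            activate_history.append(row_address)
            if len(activate_history) > N:
                activate_history.pop(0)

        def issue_mop_command():
            """Choose a suspected aggressor row and the neighbor row to refresh."""
            if not activate_history:
                return None
            # Aggressor: one of the last N activates, chosen with equal probability.
            aggressor = random.choice(activate_history)
            # Neighbor distance d is chosen with weight 1/d, so nearer rows are
            # refreshed more often.
            distances = list(range(1, MAX_DISTANCE + 1))
            weights = [1.0 / d for d in distances]
            distance = random.choices(distances, weights=weights)[0]
            side = random.choice([-1, +1])      # refresh above or below the aggressor
            neighbor = aggressor + side * distance
            return aggressor, neighbor           # values carried by the MOP command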

    ENERGY EFFICIENT STORAGE OF ERROR-CORRECTION-DETECTION INFORMATION
    3.

    Publication Number: US20230297474A1

    Publication Date: 2023-09-21

    Application Number: US18130810

    Filing Date: 2023-04-04

    Applicant: Rambus Inc.

    Abstract: Accessing data together with its error correction information may involve accessing multiple data channels (e.g., 8) and one error detection and correction channel concurrently. This technique requires a total of N+1 row requests for each access, where N is the number of data channels (e.g., 8 data row accesses and 1 error detection and correction row access equals 9 row accesses). A single (or at least fewer than N) data channel row may instead be accessed concurrently with a single error detection and correction row. This reduces the number of row requests to two (2): one for the data and one for the error detection and correction information. Because row requests consume power, reducing the number of row requests is more power efficient.
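
    The row-request arithmetic in the abstract can be stated directly; the per-activation energy figure below is a placeholder used only to show the scale of the saving, not a value from the abstract.

        N_DATA_CHANNELS = 8   # example channel count from the abstract

        # Conventional scheme: one row request per data channel plus one row
        # request for the error detection and correction channel.
        conventional_row_requests = N_DATA_CHANNELS + 1   # 9 row requests

        # Scheme described above: one data-channel row accessed together with
        # one error detection and correction row.
        reduced_row_requests = 1 + 1                      # 2 row requests

        ACTIVATE_ENERGY_NJ = 1.0  # placeholder energy per row activation
        print("conventional:", conventional_row_requests * ACTIVATE_ENERGY_NJ, "nJ")
        print("reduced:     ", reduced_row_requests * ACTIVATE_ENERGY_NJ, "nJ")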

    DRAM CACHE TAG PROBING
    4.
    Invention Application

    Publication Number: US20240394195A1

    Publication Date: 2024-11-28

    Application Number: US18665319

    Filing Date: 2024-05-15

    Applicant: Rambus Inc.

    Abstract: A dynamic random access memory (DRAM) device includes functions configured to aid in operating the DRAM device as part of data caching. The DRAM is configured to respond to at least two types of commands. A first type of command (cache data access command) seeks to access a cache line of data, if present in the DRAM cache. A second type of command (cache probe command) seeks to determine whether a cache line of data is present, but does not request that the data be returned. In response to these types of access commands, the DRAM device is configured to receive cache tag query values and to compare stored cache tag values with the cache tag query values. A hit/miss (HM) interface/bus may indicate the result of the cache tag compare and stored cache line status bits to a controller.
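
    A toy Python model of the two command types described above; the tag layout, set indexing, and status-bit encoding are assumptions, not details from the abstract.

        class DramCache:
            def __init__(self):
                self.lines = {}   # set_index -> (stored_tag, status_bits, data)

            def cache_data_access(self, set_index, tag_query):
                """Return the cache line if the stored tag matches the query."""
                entry = self.lines.get(set_index)
                hit = entry is not None and entry[0] == tag_query
                # The hit/miss (HM) interface reports the compare result and
                # the stored cache line status bits to the controller.
                hm_result = {"hit": hit, "status": entry[1] if entry else 0}
                data = entry[2] if hit else None
                return hm_result, data

            def cache_probe(self, set_index, tag_query):
                """Report whether the line is present without returning data."""
                entry = self.lines.get(set_index)
                hit = entry is not None and entry[0] == tag_query
                return {"hit": hit, "status": entry[1] if entry else 0}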

    ENERGY EFFICIENT STORAGE OF ERROR-CORRECTION-DETECTION INFORMATION
    5.

    Publication Number: US20220327021A1

    Publication Date: 2022-10-13

    Application Number: US17734464

    Filing Date: 2022-05-02

    Applicant: Rambus Inc.

    Abstract: Accessing data together with its error correction information may involve accessing multiple data channels (e.g., 8) and one error detection and correction channel concurrently. This technique requires a total of N+1 row requests for each access, where N is the number of data channels (e.g., 8 data row accesses and 1 error detection and correction row access equals 9 row accesses). A single (or at least fewer than N) data channel row may instead be accessed concurrently with a single error detection and correction row. This reduces the number of row requests to two (2): one for the data and one for the error detection and correction information. Because row requests consume power, reducing the number of row requests is more power efficient.

    ENERGY EFFICIENT STORAGE OF ERROR-CORRECTION-DETECTION INFORMATION
    6.

    Publication Number: US20240354191A1

    Publication Date: 2024-10-24

    Application Number: US18649031

    Filing Date: 2024-04-29

    Applicant: Rambus Inc.

    Abstract: Accessing data together with its error correction information may involve accessing multiple data channels (e.g., 8) and one error detection and correction channel concurrently. This technique requires a total of N+1 row requests for each access, where N is the number of data channels (e.g., 8 data row accesses and 1 error detection and correction row access equals 9 row accesses). A single (or at least fewer than N) data channel row may instead be accessed concurrently with a single error detection and correction row. This reduces the number of row requests to two (2): one for the data and one for the error detection and correction information. Because row requests consume power, reducing the number of row requests is more power efficient.

    BLOCK COPY
    7.
    Invention Application

    Publication Number: US20220083224A1

    Publication Date: 2022-03-17

    Application Number: US17461105

    Filing Date: 2021-08-30

    Applicant: Rambus Inc.

    Abstract: An interconnected stack of one or more Dynamic Random Access Memory (DRAM) die also has one or more custom logic, controller, or processor die. The custom die(s) of the stack include direct channel interfaces that allow direct access to memory regions on one or more DRAMs in the stack. The direct channels are time-division multiplexed such that each DRAM die is associated with a time slot on a direct channel. The custom die configures a first DRAM die to read a block of data and transmit it via the direct channel using a time slot that is assigned to a second DRAM die. The custom die also configures the second DRAM die to receive the block of data in its assigned time slot and write it to memory.
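
    A minimal sketch of the configuration step described above, assuming each DRAM die normally owns the direct-channel time slot matching its index; the command names and fields are illustrative only.

        def configure_block_copy(src_die, dst_die, block_address, block_length):
            """Build the two configuration commands for a die-to-die block copy."""
            dst_slot = dst_die   # assumed slot assignment: slot number == die index
            return [
                # Source die: read the block and drive the direct channel during
                # the destination die's time slot instead of its own.
                {"die": src_die, "op": "READ_AND_TX", "addr": block_address,
                 "len": block_length, "tx_slot": dst_slot},
                # Destination die: capture data arriving in its own time slot
                # and write it to the same addresses.
                {"die": dst_die, "op": "RX_AND_WRITE", "addr": block_address,
                 "len": block_length, "rx_slot": dst_slot},
            ]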

    BLOCK COPY
    8.
    Invention Application

    Publication Number: US20250028467A1

    Publication Date: 2025-01-23

    Application Number: US18794915

    Filing Date: 2024-08-05

    Applicant: Rambus Inc.

    Abstract: An interconnected stack of one or more Dynamic Random Access Memory (DRAM) die also has one or more custom logic, controller, or processor die. The custom die(s) of the stack include direct channel interfaces that allow direct access to memory regions on one or more DRAMs in the stack. The direct channels are time-division multiplexed such that each DRAM die is associated with a time slot on a direct channel. The custom die configures a first DRAM die to read a block of data and transmit it via the direct channel using a time slot that is assigned to a second DRAM die. The custom die also configures the second DRAM die to receive the block of data in its assigned time slot and write it to memory.

    MEMORY DEVICE FLUSH BUFFER OPERATIONS
    9.
    Invention Publication

    Publication Number: US20240311301A1

    Publication Date: 2024-09-19

    Application Number: US18598142

    Filing Date: 2024-03-07

    Applicant: Rambus Inc.

    CPC classification number: G06F12/0804

    Abstract: A dynamic random access memory (DRAM) device includes functions configured to aid in operating the DRAM device as part of data caching. In response to some write and/or read access commands, the DRAM device is configured to copy a cache line (e.g., a dirty cache line) from the main DRAM memory array, place it in a flush buffer, and replace the copied cache line in the main DRAM memory array with a new (e.g., different) cache line of data. In response to conditions and/or events (e.g., explicit command, refresh, write-to-read command sequence, unused data bus bandwidth, full flush buffer, etc.), the DRAM device transmits the cache line from the flush buffer to the controller. The controller may then transmit the cache line to other cache levels.
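
    A toy Python model of the flush-buffer behavior described above; the buffer depth and the particular trigger conditions modeled here are assumptions.

        from collections import deque

        class DramWithFlushBuffer:
            FLUSH_BUFFER_DEPTH = 4           # assumed buffer depth

            def __init__(self):
                self.array = {}              # main DRAM array: address -> cache line
                self.flush_buffer = deque()  # evicted (e.g., dirty) cache lines

            def replace_line(self, address, new_line):
                """On qualifying accesses, move the old line to the flush buffer
                and install the new line in its place."""
                old_line = self.array.get(address)
                if old_line is not None:
                    self.flush_buffer.append((address, old_line))
                self.array[address] = new_line

            def maybe_flush(self, explicit=False, bus_idle=False):
                """Drain the buffer to the controller on an explicit command, when
                the bus has spare bandwidth, or when the buffer is full."""
                flushed = []
                if explicit or bus_idle or len(self.flush_buffer) >= self.FLUSH_BUFFER_DEPTH:
                    while self.flush_buffer:
                        flushed.append(self.flush_buffer.popleft())  # sent to controller
                return flushed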

    STACKED DEVICE COMMUNICATION
    10.
    Invention Application

    Publication Number: US20220229601A1

    Publication Date: 2022-07-21

    Application Number: US17576529

    Filing Date: 2022-01-14

    Applicant: Rambus Inc.

    Abstract: An interconnected stack of one or more Dynamic Random Access Memory (DRAM) die has a base logic die and one or more custom logic or processor die. The processor logic die snoops commands sent to and through the stack. In particular, the processor logic die may snoop mode setting commands (e.g., mode register set (MRS) commands). At least one mode setting command that is ignored by the DRAM die(s) in the stack is used to communicate a command to the processor logic die. In response, the processor logic die may prevent commands, addresses, and data from reaching the DRAM die(s). This enables the processor logic die to send commands/addresses and communicate data with the DRAM die(s). With this access, the processor logic die may execute software using the DRAM die(s) for program and/or data storage and retrieval.
