Stacked device communication
    61.
    Granted patent

    Publication Number: US11922066B2

    Publication Date: 2024-03-05

    Application Number: US17576529

    Filing Date: 2022-01-14

    Applicant: Rambus Inc.

    Abstract: An interconnected stack of one or more Dynamic Random Access Memory (DRAM) die has a base logic die and one or more custom logic or processor die. The processor logic die snoops commands sent to and through the stack. In particular, the processor logic die may snoop mode setting commands (e.g., mode register set—MRS commands). At least one mode setting command that is ignored by the DRAM in the stack is used to communicate a command to the processor logic die. In response, the processor logic die may prevent commands, addresses, and data from reaching the DRAM die(s). This enables the processor logic die to send commands/addresses and communicate data with the DRAM die(s). While able to send commands/addresses and communicate data with the DRAM die(s), the processor logic die may execute software using the DRAM die(s) for program and/or data storage and retrieval.
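The snooping mechanism described in the abstract can be sketched as a small simulation. This is an illustrative sketch only, not the patented implementation; the class names and the reserved MRS register index are hypothetical:

```python
# Sketch: a processor logic die in a DRAM stack snoops commands on their way
# to the DRAM and treats one mode-register-set (MRS) encoding that the DRAM
# ignores as an in-band command channel to itself.

RESERVED_MRS = 0x3F  # hypothetical MRS register index the DRAM ignores

class DramDie:
    def __init__(self):
        self.mode_registers = {}
        self.commands_seen = []

    def receive(self, cmd, arg):
        self.commands_seen.append((cmd, arg))
        if cmd == "MRS" and arg[0] != RESERVED_MRS:
            self.mode_registers[arg[0]] = arg[1]

class LogicDie:
    """Processor logic die that snoops commands before forwarding them."""
    def __init__(self, dram):
        self.dram = dram
        self.isolated = False  # when True, host commands are blocked from DRAM
        self.snooped = []

    def receive(self, cmd, arg):
        self.snooped.append((cmd, arg))
        if cmd == "MRS" and arg[0] == RESERVED_MRS:
            # The DRAM ignores this MRS, so interpret it as a command to us.
            self.isolated = (arg[1] == 1)
            return  # do not forward to the DRAM
        if not self.isolated:
            self.dram.receive(cmd, arg)

dram = DramDie()
logic = LogicDie(dram)
logic.receive("MRS", (0x00, 0x12))       # ordinary mode set reaches the DRAM
logic.receive("MRS", (RESERVED_MRS, 1))  # isolate: logic die takes over bus
logic.receive("WRITE", (0x100, 0xAB))    # snooped, but blocked from the DRAM
```

Once isolated, the logic die could issue its own commands to the DRAM for program and data storage, as the abstract describes.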

    High-bandwidth neural network
    62.
    Granted patent

    Publication Number: US11915136B1

    Publication Date: 2024-02-27

    Application Number: US17952852

    Filing Date: 2022-09-26

    Applicant: Rambus Inc.

    Inventor: Steven C. Woo

    CPC classification number: G06N3/08 G06N3/04 G06N3/063

    Abstract: One or more neural network layers are implemented by respective sets of signed multiply-accumulate units that generate dual analog result signals indicative of positive and negative product accumulations, respectively. The two analog result signals, and thus the positive and negative product accumulations, are differentially combined to produce a merged analog output signal that constitutes the output of a neural node within the subject neural network layer.
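As a hedged numeric sketch of the dual-accumulation idea: the digital arithmetic below only mimics the two analog rails described above, with positive and negative products accumulated separately and then differentially combined. The function name is hypothetical:

```python
# Sketch: a signed multiply-accumulate node that keeps positive and negative
# product accumulations on separate rails and outputs their difference.

def neural_node(inputs, weights):
    # Positive products accumulate on one rail...
    pos_acc = sum(x * w for x, w in zip(inputs, weights) if x * w > 0)
    # ...negative products accumulate (as magnitudes) on the other rail.
    neg_acc = sum(-x * w for x, w in zip(inputs, weights) if x * w < 0)
    # Differential combination of the two result signals:
    return pos_acc - neg_acc

out = neural_node([1.0, -2.0, 0.5], [0.5, 0.25, -1.0])
```

The differential combination recovers the ordinary signed dot product, so `out` here equals `1.0*0.5 + (-2.0)*0.25 + 0.5*(-1.0) = -0.5`.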

    STACKED DEVICE SYSTEM
    68.
    Published application

    Publication Number: US20240037055A1

    Publication Date: 2024-02-01

    Application Number: US18230375

    Filing Date: 2023-08-04

    Applicant: Rambus Inc.

    Inventor: Steven C. WOO

    CPC classification number: G06F13/4027 H01L25/0652 G06N3/045

    Abstract: Multiple device stacks are interconnected in a ring topology. The inter-device stack communication may utilize a handshake protocol. This ring topology may include the host so that the host may initialize and load the device stacks with data and/or commands (e.g., software, algorithms, etc.). The inter-device stack interconnections may also be configured to include/remove the host and/or to implement varying numbers of separate ring topologies. By configuring the system with more than one ring topology, and assigning different problems to different rings, multiple, possibly unrelated, machine learning tasks may be performed in parallel by the device stack system.
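A minimal sketch of the ring topology described above, assuming a simple ack-style handshake and hypothetical node names (the actual inter-stack protocol is not specified here):

```python
# Sketch: device stacks (and optionally the host) linked in a ring; each hop
# forwards a payload to its successor using a send/acknowledge handshake.

class Node:
    def __init__(self, name):
        self.name = name
        self.next = None   # successor in the ring
        self.inbox = []

    def send(self, payload):
        # Handshake sketch: the receiver acknowledges before we proceed.
        ack = self.next.receive(payload)
        assert ack, "handshake failed"

    def receive(self, payload):
        self.inbox.append(payload)
        return True  # acknowledge

def make_ring(nodes):
    # Wire each node to its successor, closing the loop back to the first.
    for a, b in zip(nodes, nodes[1:] + nodes[:1]):
        a.next = b

host = Node("host")
stacks = [Node(f"stack{i}") for i in range(3)]
make_ring([host] + stacks)  # ring that includes the host for initialization

# The host loads data/commands, which propagate hop by hop around the ring:
cur = host
for _ in range(len(stacks)):
    cur.send("load-weights")
    cur = cur.next
```

Re-running `make_ring` over subsets of the stacks (excluding the host) would model reconfiguring the system into multiple independent rings, each assigned a different machine learning task.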

    Direct digital sequence detection and equalization

    Publication Number: US11876652B2

    Publication Date: 2024-01-16

    Application Number: US17400823

    Filing Date: 2021-08-12

    Applicant: Rambus Inc.

    CPC classification number: H04L25/4917 H04L25/03019 H04L25/03178

    Abstract: Methods and apparatuses for direct sequence detection can receive an input signal over a communication channel. Next, the input signal can be sampled based on a clock signal to obtain a sampled voltage. A set of reference voltages can be generated based on a main cursor, a set of pre-cursors, and a set of post-cursors associated with the communication channel. Each generated reference voltage in the set of reference voltages can correspond to a particular sequence of symbols. A sequence corresponding to the sampled voltage can be selected based on comparing the sampled voltage with the set of reference voltages.
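The detection steps above can be sketched as follows, with assumed cursor tap values and a PAM-2 symbol alphabet (the real channel taps and modulation are not given in the abstract):

```python
# Sketch: build one reference voltage per candidate symbol sequence from the
# channel's pre-cursor, main-cursor, and post-cursor taps, then select the
# sequence whose reference is closest to the sampled voltage.
from itertools import product

PRE, MAIN, POST = [0.1], 1.0, [0.3]  # assumed channel cursor values
SYMBOLS = (-1, +1)                   # assumed PAM-2 symbol alphabet
taps = PRE + [MAIN] + POST

# Each reference voltage corresponds to one particular sequence of symbols.
references = {
    seq: sum(s * t for s, t in zip(seq, taps))
    for seq in product(SYMBOLS, repeat=len(taps))
}

def detect(sampled_voltage):
    # Select the sequence whose reference best matches the sampled voltage.
    return min(references, key=lambda seq: abs(references[seq] - sampled_voltage))

seq = detect(1.35)  # nearest reference is (+1, +1, +1): 0.1 + 1.0 + 0.3 = 1.4
```

With one pre-cursor and one post-cursor there are 2^3 = 8 candidate sequences, so eight reference voltages are generated and compared per sample.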

    FLEXIBLE METADATA ALLOCATION AND CACHING
    70.
    Published application

    Publication Number: US20240013819A1

    Publication Date: 2024-01-11

    Application Number: US18348716

    Filing Date: 2023-07-07

    Applicant: Rambus Inc.

    CPC classification number: G11C7/1084 G11C7/1006

    Abstract: An apparatus and method for flexible metadata allocation and caching. In one embodiment of the method, first and second requests are received from first and second applications, respectively, wherein the requests specify a reading of first and second data, respectively, from one or more memory devices. The circuit reads the first and second data in response to receiving the first and second requests, and receives first and second metadata from the one or more memory devices in response to those requests. The first and second metadata correspond to the first and second data, respectively. The first and second data are equal in size, while the first and second metadata are unequal in size.
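A hedged sketch of the equal-data, unequal-metadata behavior: the field sizes, addresses, and the per-application allocation table below are all assumptions for illustration, not from the patent:

```python
# Sketch: two applications read equal-size data blocks, but each receives a
# differently sized metadata field according to its own metadata allocation.

# Hypothetical allocation: bytes of metadata returned per 64-byte data block.
META_ALLOC = {"app1": 2, "app2": 8}

# Hypothetical memory contents: address -> (data block, stored metadata).
MEMORY = {
    0x000: (b"\xaa" * 64, b"\x01\x02\x03\x04\x05\x06\x07\x08"),
    0x040: (b"\xbb" * 64, b"\x11\x12\x13\x14\x15\x16\x17\x18"),
}

def read(app, addr):
    data, stored_meta = MEMORY[addr]
    # Return only as much metadata as this application's allocation permits.
    return data, stored_meta[:META_ALLOC[app]]

d1, m1 = read("app1", 0x000)  # 64 bytes of data, 2 bytes of metadata
d2, m2 = read("app2", 0x040)  # 64 bytes of data, 8 bytes of metadata
```

The two reads return data of equal size while the accompanying metadata differs in size per application, matching the behavior the abstract describes.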
