    1. HBM RAS CACHE ARCHITECTURE
    Invention Application

    Publication Number: US20250077370A1

    Publication Date: 2025-03-06

    Application Number: US18953042

    Filing Date: 2024-11-19

    Abstract: According to one general aspect, an apparatus may include a plurality of stacked integrated circuit dies that include a memory cell die and a logic die. The memory cell die may be configured to store data at a memory address. The logic die may include an interface to the stacked integrated circuit dies and configured to communicate memory accesses between the memory cell die and at least one external device. The logic die may include a reliability circuit configured to ameliorate data errors within the memory cell die. The reliability circuit may include a spare memory configured to store data, and an address table configured to map a memory address associated with an error to the spare memory. The reliability circuit may be configured to determine if the memory access is associated with an error, and if so completing the memory access with the spare memory.
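
    The remapping behavior in the abstract can be illustrated with a small software model: a lookup table that redirects accesses to addresses known to be faulty into a spare store. The class and method names below are assumptions for illustration only, not the claimed circuit.

        # Illustrative model of the reliability circuit described above (assumed
        # names, not the patent's implementation): a spare memory plus an address
        # table that remaps accesses to known-faulty addresses.

        class ReliabilityCircuit:
            def __init__(self, spare_size=16):
                self.spare = [0] * spare_size      # spare memory entries
                self.remap = {}                    # faulty address -> spare index
                self.backing = {}                  # stands in for the memory cell die

            def mark_faulty(self, addr):
                """Allocate a spare entry for an address that produced an error."""
                if addr not in self.remap:
                    if len(self.remap) >= len(self.spare):
                        raise RuntimeError("spare memory exhausted")
                    self.remap[addr] = len(self.remap)

            def write(self, addr, value):
                if addr in self.remap:                     # error-associated address:
                    self.spare[self.remap[addr]] = value   # complete access with spare
                else:
                    self.backing[addr] = value

            def read(self, addr):
                if addr in self.remap:
                    return self.spare[self.remap[addr]]
                return self.backing.get(addr, 0)

        ras = ReliabilityCircuit()
        ras.write(0x1000, 0xAB)
        ras.mark_faulty(0x1000)        # subsequent accesses to 0x1000 use the spare
        ras.write(0x1000, 0xCD)
        assert ras.read(0x1000) == 0xCD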

    3. BANDWIDTH BOOSTED STACKED MEMORY
    Invention Application

    Publication Number: US20230087747A1

    Publication Date: 2023-03-23

    Application Number: US18070328

    Filing Date: 2022-11-28

    Abstract: A high bandwidth memory system. In some embodiments, the system includes: a memory stack having a plurality of memory dies and eight 128-bit channels; and a logic die, the memory dies being stacked on, and connected to, the logic die; wherein the logic die may be configured to operate a first channel of the 128-bit channels in: a first mode, in which a first 64 bits operate in pseudo-channel mode, and a second 64 bits operate as two 32-bit fine-grain channels, or a second mode, in which the first 64 bits operate as two 32-bit fine-grain channels, and the second 64 bits operate as two 32-bit fine-grain channels.
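
    The two channel configurations can be summarized with a short sketch; the mode numbering and return format below are assumptions for illustration, not the claimed logic-die design.

        # Illustrative sketch of the two operating modes for one 128-bit channel
        # described above (assumed naming, not the patent's implementation).

        def configure_channel(mode):
            """Return the sub-channel layout, in bits, for one 128-bit channel."""
            if mode == 1:
                # first 64 bits in pseudo-channel mode, second 64 bits as two
                # 32-bit fine-grain channels
                return [("pseudo", 64), ("fine", 32), ("fine", 32)]
            elif mode == 2:
                # all 128 bits operated as four 32-bit fine-grain channels
                return [("fine", 32)] * 4
            raise ValueError("unknown mode")

        for mode in (1, 2):
            subs = configure_channel(mode)
            assert sum(width for _, width in subs) == 128   # always a 128-bit channel
            print(mode, subs)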

    4. HIGH BANDWIDTH MEMORY SYSTEM
    Invention Application

    Publication Number: US20220414030A1

    Publication Date: 2022-12-29

    Application Number: US17901846

    Filing Date: 2022-09-01

    Abstract: A high-bandwidth memory (HBM) includes a memory and a controller. The controller receives a data write request from a processor external to the HBM and the controller stores an entry in the memory indicating at least one address of data of the data write request and generates an indication that a data bus is available for an operation during a cycle time of the data write request based on the data write request comprising sparse data or data-value similarity. Sparse data includes a predetermined percentage of data values equal to zero, and data-value similarity includes a predetermined amount of spatial value locality of the data values. The predetermined percentage of data values equal to zero of sparse data and the predetermined amount of spatial value locality of the special-value pattern are both based on a predetermined data granularity.
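
    The sparse-data condition can be illustrated with a minimal check at a fixed block granularity; the threshold value and function name below are assumptions for illustration, not values from the claims.

        # Illustrative check, at an assumed granularity, of the "sparse data"
        # condition described above (names and threshold are assumptions).

        def is_sparse(values, zero_fraction_threshold=0.75):
            """True if at least the given fraction of values in this block are zero."""
            zeros = sum(1 for v in values if v == 0)
            return zeros / len(values) >= zero_fraction_threshold

        block = [0, 0, 0, 7, 0, 0, 0, 0]       # one block at the chosen granularity
        if is_sparse(block):
            # the controller could then signal that the data bus is free for
            # another operation during this write's cycle time
            print("bus available during write cycle")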

    6. TRANSACTION-BASED HYBRID MEMORY MODULE
    Invention Application (Pending - Published)

    Publication Number: US20170060434A1

    Publication Date: 2017-03-02

    Application Number: US14947145

    Filing Date: 2015-11-20

    CPC classification number: G06F3/0679 G06F3/061 G06F3/0656 G06F13/1668

    Abstract: A hybrid memory module includes a dynamic random access memory (DRAM) cache, a flash storage, and a memory controller. The DRAM cache includes one or more DRAM devices and a DRAM controller, and the flash storage includes one or more flash devices and a flash controller. The memory controller interfaces with the DRAM controller and the flash controller.

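    A minimal sketch of the read/write flow implied by the abstract, assuming a write-through policy and cache fill on miss; the structure and names below are illustrative only, not the module's actual controller design.

        # Illustrative read path for the hybrid module described above (assumed
        # structure): the module-level controller checks the DRAM cache first
        # and falls back to flash on a miss.

        class HybridMemoryModule:
            def __init__(self):
                self.dram_cache = {}           # stands in for DRAM devices + controller
                self.flash = {}                # stands in for flash devices + controller

            def write(self, addr, value):
                self.dram_cache[addr] = value  # write into the DRAM cache
                self.flash[addr] = value       # simplistic write-through to flash

            def read(self, addr):
                if addr in self.dram_cache:        # cache hit: serve from DRAM
                    return self.dram_cache[addr]
                value = self.flash.get(addr)       # miss: fetch from flash
                if value is not None:
                    self.dram_cache[addr] = value  # fill the cache for later accesses
                return value

        module = HybridMemoryModule()
        module.write(0x40, 123)
        module.dram_cache.clear()              # simulate eviction from the DRAM cache
        assert module.read(0x40) == 123        # served from flash, then cached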

    COORDINATED IN-MODULE RAS FEATURES FOR SYNCHRONOUS DDR COMPATIBLE MEMORY

    Publication Number: US20220229551A1

    Publication Date: 2022-07-21

    Application Number: US17713228

    Filing Date: 2022-04-04

    Abstract: A memory module includes a memory array, an interface and a controller. The memory array includes an array of memory cells and is configured as a dual in-line memory module (DIMM). The DIMM includes a plurality of connections that have been repurposed from a standard DIMM pin out configuration to interface operational status of the memory device to a host device. The interface is coupled to the memory array and the plurality of connections of the DIMM to interface the memory array to the host device. The controller is coupled to the memory array and the interface, controls at least one of a refresh operation of the memory array, an error-correction operation of the memory array, a memory scrubbing operation of the memory array, and a wear-level control operation of the array, and interfaces with the host device.
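
    One way to picture the repurposed status pins is a simple binary encoding of the in-module operation currently running; the status names, pin count, and encoding below are assumptions for illustration, not taken from the claims.

        # Illustrative sketch of status signalling over repurposed DIMM pins as
        # described above (signal names and encoding are assumptions).

        from enum import Enum

        class ModuleStatus(Enum):
            READY = 0          # no in-module operation pending
            REFRESH = 1        # refresh in progress
            ECC_CORRECT = 2    # error-correction in progress
            SCRUBBING = 3      # memory scrubbing in progress
            WEAR_LEVEL = 4     # wear-level operation in progress

        def drive_status_pins(status, pin_count=3):
            """Encode the current status onto the repurposed pins as bit levels."""
            return [(status.value >> bit) & 1 for bit in range(pin_count)]

        # The host samples the repurposed pins to learn what the module is doing.
        assert drive_status_pins(ModuleStatus.SCRUBBING) == [1, 1, 0]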

    8. HIGH BANDWIDTH MEMORY SYSTEM
    Invention Application

    Publication Number: US20200349093A1

    Publication Date: 2020-11-05

    Application Number: US16569657

    Filing Date: 2019-09-12

    Abstract: A high-bandwidth memory (HBM) includes a memory and a controller. The controller receives a data write request from a processor external to the HBM and the controller stores an entry in the memory indicating at least one address of data of the data write request and generates an indication that a data bus is available for an operation during a cycle time of the data write request based on the data write request comprising sparse data or data-value similarity. Sparse data includes a predetermined percentage of data values equal to zero, and data-value similarity includes a predetermined amount of spatial value locality of the data values. The predetermined percentage of data values equal to zero of sparse data and the predetermined amount of spatial value locality of the special-value pattern are both based on a predetermined data granularity.
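
    The companion condition, data-value similarity (spatial value locality), can be illustrated the same way as the sparsity check shown after the earlier abstract in this family; the distinct-value threshold below is an assumption for illustration.

        # Companion sketch to the earlier sparsity check: an assumed test for
        # "data-value similarity" at a fixed granularity (threshold is illustrative).

        def has_value_similarity(values, max_distinct=2):
            """True if the block contains at most `max_distinct` distinct values."""
            return len(set(values)) <= max_distinct

        block = [5, 5, 5, 5, 6, 5, 5, 5]       # one block at the chosen granularity
        if has_value_similarity(block):
            # as with sparse data, the controller could signal that the bus is
            # available for another operation during this write's cycle time
            print("bus available during write cycle")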

    DATAFLOW ACCELERATOR ARCHITECTURE FOR GENERAL MATRIX-MATRIX MULTIPLICATION AND TENSOR COMPUTATION IN DEEP LEARNING

    Publication Number: US20200183837A1

    Publication Date: 2020-06-11

    Application Number: US16388863

    Filing Date: 2019-04-18

    Abstract: A tensor computation dataflow accelerator semiconductor circuit is disclosed. The dataflow accelerator includes a DRAM bank and a peripheral array of multiply-and-add units disposed adjacent to the DRAM bank. The peripheral array of multiply-and-add units is configured to form a pipelined dataflow chain in which partial output data from one multiply-and-add unit from among the array of multiply-and-add units is fed into another multiply-and-add unit from among the array of multiply-and-add units for data accumulation. Near-DRAM-processing dataflow (NDP-DF) accelerator unit dies may be stacked atop a base die. The base die may be disposed on a passive silicon interposer adjacent to a processor or a controller. The NDP-DF accelerator units may process partial matrix output data in parallel. The partial matrix output data may be propagated in a forward or backward direction. The tensor computation dataflow accelerator may perform a partial matrix transposition.
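
    The pipelined accumulation described in the abstract can be modeled as a chain in which each multiply-and-add step forwards its partial sum to the next; this is a software analogy of the dataflow only, not the accelerator circuit.

        # Illustrative dataflow for the pipelined multiply-and-add chain described
        # above (a simplified software model, not the hardware design): each unit
        # multiplies its own weight by the incoming activation and adds the
        # partial sum passed from the previous unit.

        def mac_chain(weights, activations):
            """Accumulate sum(w[i] * a[i]) by forwarding partial sums along a chain."""
            partial = 0
            for w, a in zip(weights, activations):   # one multiply-and-add unit per step
                partial = partial + w * a            # partial output fed to the next unit
            return partial

        # One output element of a small matrix-matrix product, computed by the chain.
        row = [1, 2, 3]
        col = [4, 5, 6]
        assert mac_chain(row, col) == 32             # 1*4 + 2*5 + 3*6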

    10. DUAL ROW-COLUMN MAJOR DRAM
    Invention Application

    Publication Number: US20190043553A1

    Publication Date: 2019-02-07

    Application Number: US15713587

    Filing Date: 2017-09-22

    Abstract: A memory device includes an array of 2T1C DRAM cells and a memory controller. The DRAM cells are arranged as a plurality of rows and columns of DRAM cells. The memory controller is internal to the memory device and is coupled to the array of DRAM cells. The memory controller is capable of receiving commands input to the memory device and is responsive to the received commands to control row-major access and column-major access to the array of DRAM cells. In one embodiment, each transistor of a memory cell includes a terminal directly coupled to a storage node of the capacitor. In another embodiment, a first transistor of a memory cell includes a terminal directly coupled to a storage node of the capacitor, and a second transistor of the 2T1C memory cell includes a gate terminal directly coupled to the storage node of the capacitor.
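
    Row-major versus column-major access to the same cell array can be pictured with a small software analogy; the class below models only the access semantics, not the 2T1C cell circuit or its control path.

        # Illustrative model of row-major versus column-major access to one cell
        # array, as described above (a software analogy with assumed names).

        class DualMajorArray:
            def __init__(self, rows, cols):
                self.cells = [[0] * cols for _ in range(rows)]

            def write(self, r, c, value):
                self.cells[r][c] = value

            def read_row(self, r):
                """Row-major access: return one whole row."""
                return list(self.cells[r])

            def read_col(self, c):
                """Column-major access: return one whole column."""
                return [row[c] for row in self.cells]

        arr = DualMajorArray(2, 3)
        arr.write(0, 1, 9)
        assert arr.read_row(0) == [0, 9, 0]    # row-major view of the same data
        assert arr.read_col(1) == [9, 0]       # column-major view of the same data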
