Write Endurance Management Techniques in the Logic Layer of a Stacked Memory
    2.
    Invention application (Granted)

    Publication number: US20140181457A1

    Publication date: 2014-06-26

    Application number: US13725305

    Application date: 2012-12-21

    CPC classification number: G06F12/10 G06F11/1666 G06F11/2094

    Abstract: A system, method, and memory device embodying some aspects of the present invention for remapping external memory addresses and internal memory locations in stacked memory are provided. The stacked memory includes one or more memory layers configured to store data. The stacked memory also includes a logic layer connected to the memory layer. The logic layer has an Input/Output (I/O) port configured to receive read and write commands from external devices, a memory map configured to maintain an association between external memory addresses and internal memory locations, and a controller coupled to the I/O port, memory map, and memory layers, configured to store data received from external devices to internal memory locations.
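The remap mechanism the abstract describes can be illustrated with a minimal sketch, assuming a simple model in which a memory map associates external addresses with internal locations and a per-location write counter lets the controller remap a heavily written address to a spare location. All class, method, and parameter names here are illustrative assumptions, not from the patent.

```python
class StackedMemoryController:
    """Illustrative logic-layer controller: maps external addresses to
    internal locations and remaps worn locations to spares."""

    def __init__(self, num_locations, spare_locations, wear_limit=3):
        self.memory = {}                   # internal location -> data
        self.memory_map = {}               # external address -> internal location
        self.write_counts = {}             # internal location -> write count
        self.spares = list(spare_locations)
        self.next_internal = iter(range(num_locations))
        self.wear_limit = wear_limit

    def write(self, ext_addr, data):
        loc = self.memory_map.get(ext_addr)
        if loc is None:
            loc = next(self.next_internal)      # first use: allocate a location
            self.memory_map[ext_addr] = loc
        # Remap to a spare once this location exceeds its wear budget.
        if self.write_counts.get(loc, 0) >= self.wear_limit and self.spares:
            new_loc = self.spares.pop(0)
            self.memory[new_loc] = self.memory.pop(loc, None)  # migrate data
            self.memory_map[ext_addr] = loc = new_loc
        self.write_counts[loc] = self.write_counts.get(loc, 0) + 1
        self.memory[loc] = data

    def read(self, ext_addr):
        return self.memory[self.memory_map[ext_addr]]
```

Because external devices only ever see the external address, the remap is invisible to them: reads through the map always reach the current internal location.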


    ADAPTIVE CACHE RECONFIGURATION VIA CLUSTERING

    Publication number: US20200293445A1

    Publication date: 2020-09-17

    Application number: US16355168

    Application date: 2019-03-15

    Abstract: A method of dynamic cache configuration includes determining, for a first clustering configuration, whether a current cache miss rate exceeds a miss rate threshold. The first clustering configuration includes a plurality of graphics processing unit (GPU) compute units clustered into a first plurality of compute unit clusters. The method further includes clustering, based on the current cache miss rate exceeding the miss rate threshold, the plurality of GPU compute units into a second clustering configuration having a second plurality of compute unit clusters fewer than the first plurality of compute unit clusters.
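The reclustering step can be sketched as follows, under the simplifying assumption that exceeding the miss-rate threshold halves the cluster count (the patent does not specify how much coarser the second configuration is). Function and parameter names are illustrative.

```python
def reconfigure_clusters(compute_units, num_clusters, miss_rate, threshold):
    """Illustrative sketch: if the current cache miss rate exceeds the
    threshold, regroup the GPU compute units into fewer, larger clusters."""
    if miss_rate > threshold and num_clusters > 1:
        num_clusters = max(1, num_clusters // 2)   # assumed halving policy
    size = -(-len(compute_units) // num_clusters)  # ceiling division
    return [compute_units[i:i + size]
            for i in range(0, len(compute_units), size)]
```

With eight compute units, a miss rate above the threshold collapses four clusters of two into two clusters of four; below the threshold the configuration is left unchanged.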

Processor with Host and Slave Operating Modes Stacked with Memory
    5.
    Invention application (Pending, published)

    Publication number: US20140181453A1

    Publication date: 2014-06-26

    Application number: US13721395

    Application date: 2012-12-20

    CPC classification number: G11C5/06 G06F12/02 G06F12/10 G06F13/1694 G11C7/1006

    Abstract: A system, method, and computer program product are provided for a memory device system. One or more memory dies and at least one logic die are disposed in a package and communicatively coupled. The logic die comprises a processing device configurable to manage virtual memory and operate in an operating mode. The operating mode is selected from a set of operating modes comprising a slave operating mode and a host operating mode.


DIRTY CACHELINE DUPLICATION
    6.
    Invention application

    Publication number: US20140173379A1

    Publication date: 2014-06-19

    Application number: US13720536

    Application date: 2012-12-19

    CPC classification number: G06F11/1064 G06F12/0893

    Abstract: A method of managing memory includes installing a first cacheline at a first location in a cache memory and receiving a write request. In response to the write request, the first cacheline is modified in accordance with the write request and marked as dirty. Also in response to the write request, a second cacheline is installed that duplicates the first cacheline, as modified in accordance with the write request, at a second location in the cache memory.
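The write path described above can be sketched with a minimal model: the write modifies the first cacheline, marks it dirty, and installs a duplicate at a second location, so no location holds the only dirty copy. The class and its fields are illustrative assumptions, not the patent's structures.

```python
class DuplicatingCache:
    """Illustrative cache model: each location holds (tag, data, dirty)."""

    def __init__(self):
        self.lines = {}  # location -> (tag, data, dirty)

    def install(self, location, tag, data):
        self.lines[location] = (tag, data, False)  # clean on install

    def write(self, location, duplicate_location, data):
        tag, _, _ = self.lines[location]
        self.lines[location] = (tag, data, True)            # modify, mark dirty
        self.lines[duplicate_location] = (tag, data, True)  # duplicate copy
```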


    REDUCING CACHE FOOTPRINT IN CACHE COHERENCE DIRECTORY

    Publication number: US20190163632A1

    Publication date: 2019-05-30

    Application number: US15825880

    Application date: 2017-11-29

    Abstract: A method includes monitoring, at a cache coherence directory, states of cachelines stored in a cache hierarchy of a data processing system using a plurality of entries of the cache coherence directory. Each entry of the cache coherence directory is associated with a corresponding cache page of a plurality of cache pages, and each cache page represents a corresponding set of contiguous cachelines. The method further includes selectively evicting cachelines from a first cache of the cache hierarchy based on cacheline utilization densities of cache pages represented by the corresponding entries of the plurality of entries of the cache coherence directory.
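The utilization-density criterion can be illustrated with a short sketch, assuming each directory entry is reduced to a presence bitmap over a page's contiguous cachelines; pages whose fraction of present lines falls below a threshold become eviction candidates. The data layout and threshold are assumptions for illustration.

```python
def pick_eviction_candidates(directory, density_threshold):
    """Illustrative sketch: directory maps a page address to a list of
    0/1 presence flags, one per contiguous cacheline in the page.
    Sparsely utilized pages are selected for eviction first."""
    candidates = []
    for page, present_lines in directory.items():
        density = sum(present_lines) / len(present_lines)
        if density < density_threshold:
            candidates.append(page)
    return candidates
```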

EXPANDABLE BUFFER FOR MEMORY TRANSACTIONS
    9.
    Invention application

    Publication number: US20190163394A1

    Publication date: 2019-05-30

    Application number: US15824539

    Application date: 2017-11-28

    Abstract: A processing system employs an expandable memory buffer that supports enlarging the memory buffer when the processing system generates a large number of long latency memory transactions. The hybrid structure of the memory buffer allows a memory controller of the processing system to store a larger number of memory transactions while still maintaining adequate transaction throughput and also ensuring a relatively small buffer footprint and power consumption. Further, the hybrid structure allows different portions of the buffer to be placed on separate integrated circuit dies, which in turn allows the memory controller to be used in a wide variety of integrated circuit configurations, including configurations that use only one portion of the memory buffer.
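The hybrid, two-portion structure can be modeled with a minimal sketch: a small primary region absorbs transactions at full speed, and once it fills, further transactions overflow into a larger secondary region, which could sit on a separate die. The split, capacities, and refill policy here are assumptions, not details from the patent.

```python
from collections import deque

class ExpandableBuffer:
    """Illustrative hybrid buffer: a bounded primary region plus an
    overflow region that effectively enlarges the buffer under load."""

    def __init__(self, primary_capacity):
        self.primary = deque()
        self.overflow = deque()
        self.primary_capacity = primary_capacity

    def enqueue(self, transaction):
        if len(self.primary) < self.primary_capacity:
            self.primary.append(transaction)
        else:
            self.overflow.append(transaction)  # buffer "expands" under load

    def dequeue(self):
        txn = self.primary.popleft()
        if self.overflow:                      # backfill from the overflow region
            self.primary.append(self.overflow.popleft())
        return txn
```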

    PREEMPTIVE CACHE MANAGEMENT POLICIES FOR PROCESSING UNITS

    Publication number: US20180285264A1

    Publication date: 2018-10-04

    Application number: US15475435

    Application date: 2017-03-31

    CPC classification number: G06F12/0806 G06F2212/621

    Abstract: A processing system includes at least one central processing unit (CPU) core, at least one graphics processing unit (GPU) core, a main memory, and a coherence directory for maintaining cache coherence. The at least one CPU core receives a CPU cache flush command to flush cache lines stored in cache memory of the at least one CPU core prior to launching a GPU kernel. The coherence directory transfers data associated with a memory access request by the at least one GPU core from the main memory without issuing coherence probes to caches of the at least one CPU core.
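The payoff of the preemptive flush can be sketched with a toy model: once the CPU has flushed its dirty lines back to main memory, the directory can serve a GPU request directly from memory instead of probing CPU caches. The function and its parameters are illustrative assumptions.

```python
def serve_gpu_request(cpu_cache, main_memory, addr, cpu_flushed):
    """Illustrative sketch: with a preemptive CPU flush, main memory is
    known to be up to date, so no coherence probe of CPU caches is needed."""
    if cpu_flushed:
        return main_memory[addr]               # no coherence probe issued
    # Without the flush, a dirty CPU copy may be newer: probe the cache first.
    return cpu_cache.get(addr, main_memory[addr])
```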
