Multi-level memory management
    21.
    Granted invention (in force)
    Chinese title: 多级内存管理

    Publication No.: US09583182B1

    Publication Date: 2017-02-28

    Application No.: US15077424

    Filing Date: 2016-03-22

    Abstract: A multi-level memory management circuit can remap data between near and far memory. In one embodiment, a register array stores near memory addresses and far memory addresses mapped to the near memory addresses. The number of entries in the register array is less than the number of pages in near memory. Remapping logic determines that a far memory address of the requested data is absent from the register array and selects an available near memory address from the register array. Remapping logic also initiates writing of the requested data at the far memory address to the selected near memory address. Remapping logic further writes the far memory address to an entry of the register array corresponding to the selected near memory address.

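    A minimal C++ sketch of the remapping flow this abstract describes, modeled in software. The entry count, page size, the copy_page() helper, and the fallback eviction choice are illustrative assumptions, not details taken from the patent.

        // Software model of the multi-level memory remapping described in the
        // abstract: a small register array maps far-memory pages onto near-memory
        // pages, with far fewer entries than near memory has pages.
        #include <array>
        #include <cstddef>
        #include <cstdint>
        #include <optional>

        constexpr std::size_t kRemapEntries = 64;   // assumed: fewer entries than near-memory pages
        constexpr std::size_t kPageSize = 4096;     // assumed page granularity

        struct RemapEntry {
            std::uint64_t near_addr = 0;            // near-memory page this entry controls
            std::optional<std::uint64_t> far_addr;  // far-memory page currently mapped here (empty = available)
        };

        class RemapArray {
        public:
            RemapArray() {
                for (std::size_t i = 0; i < kRemapEntries; ++i)
                    entries_[i].near_addr = i * kPageSize;
            }

            // Returns the near-memory address serving far_addr, remapping on a miss.
            std::uint64_t access(std::uint64_t far_addr) {
                // Hit: the far address already has an entry in the register array.
                for (auto& e : entries_) {
                    if (e.far_addr == far_addr) return e.near_addr;
                }
                // Miss: select an available near-memory slot, copy the requested data
                // from far memory, and record the far address in that entry.
                RemapEntry& slot = select_available();
                copy_page(far_addr, slot.near_addr);    // models "initiate writing ... to the selected near memory address"
                slot.far_addr = far_addr;
                return slot.near_addr;
            }

        private:
            RemapEntry& select_available() {
                for (auto& e : entries_) {
                    if (!e.far_addr) return e;          // unused entry
                }
                return entries_[0];                     // assumed fallback; the abstract does not specify an eviction policy
            }
            static void copy_page(std::uint64_t /*from_far*/, std::uint64_t /*to_near*/) {
                // A real circuit would move the page between memory levels here.
            }

            std::array<RemapEntry, kRemapEntries> entries_{};
        };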

Data Compression In Processor Caches
    22.
    Invention application (in force)
    Chinese title: 处理器缓存中的数据压缩

    Publication No.: US20150089126A1

    Publication Date: 2015-03-26

    Application No.: US14036673

    Filing Date: 2013-09-25

    CPC classification number: G06F12/126 G06F12/0895

    Abstract: In an embodiment, a processor includes a cache data array including a plurality of physical ways, each physical way to store a baseline way and a victim way; a cache tag array including a plurality of tag groups, each tag group associated with a particular physical way and including a first tag associated with the baseline way stored in the particular physical way, and a second tag associated with the victim way stored in the particular physical way; and cache control logic to: select a first baseline way based on a replacement policy, select a first victim way based on an available capacity of a first physical way including the first victim way, and move a first data element from the first baseline way to the first victim way. Other embodiments are described and claimed.

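    The C++ sketch below illustrates the baseline-way/victim-way arrangement the abstract describes: each physical way carries two tags, and a block evicted from its baseline slot can be parked in spare capacity of another physical way instead of being dropped. The compression model, capacities, and the round-robin replacement policy are simplifying assumptions rather than the claimed design.

        // Model of a cache set whose physical ways each hold a baseline slot and a
        // victim slot, mirroring the two tags per tag group in the abstract.
        #include <cstddef>
        #include <cstdint>
        #include <optional>
        #include <vector>

        struct Block {
            std::uint64_t tag = 0;
            std::vector<std::uint8_t> data;         // compressed payload (assumed representation)
        };

        struct PhysicalWay {
            std::size_t capacity = 64;              // bytes available in this physical way (assumed)
            std::optional<Block> baseline;          // block named by the first tag of the tag group
            std::optional<Block> victim;            // block named by the second tag of the tag group
            std::size_t used() const {
                return (baseline ? baseline->data.size() : 0) +
                       (victim ? victim->data.size() : 0);
            }
        };

        class CompressedSet {
        public:
            explicit CompressedSet(std::size_t ways) : ways_(ways) {}

            // Install a new block: evict a baseline way chosen by the replacement
            // policy, then try to retire the evicted block into another physical
            // way's victim slot instead of dropping it.
            void install(Block incoming) {
                PhysicalWay& w = ways_[pick_replacement()];   // "select a first baseline way based on a replacement policy"
                std::optional<Block> evicted = std::move(w.baseline);
                w.baseline = std::move(incoming);
                if (!evicted) return;
                for (PhysicalWay& other : ways_) {            // "select a first victim way based on ... available capacity"
                    if (&other != &w && !other.victim &&
                        other.used() + evicted->data.size() <= other.capacity) {
                        other.victim = std::move(*evicted);   // "move a first data element from the ... baseline way to the ... victim way"
                        return;
                    }
                }
                // No spare capacity anywhere: the evicted block simply leaves the cache.
            }

        private:
            std::size_t pick_replacement() {                  // assumed round-robin stand-in for the real policy
                std::size_t chosen = next_;
                next_ = (next_ + 1) % ways_.size();
                return chosen;
            }
            std::vector<PhysicalWay> ways_;
            std::size_t next_ = 0;
        };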

    Adaptive granularity for reducing cache coherence overhead

    Publication No.: US10691602B2

    Publication Date: 2020-06-23

    Application No.: US16024666

    Filing Date: 2018-06-29

    Abstract: To reduce cache coherence overhead for shared caches in multi-processor systems, adaptive granularity tracks shared data at a coarse granularity and unshared data at a fine granularity. Adaptive-granularity processes select, based on a block's state, how large an entry is required to track its coherence. Shared blocks are tracked in coarse-grained region entries that include a sharer-tracking bit vector and a bit vector indicating which blocks are likely to be present in the system, but that do not identify the owner of a block. Modified or unshared data is tracked in fine-grained entries that permit ownership tracking and exact location and invalidation of cached blocks. Large caches in which the majority of blocks are shared and not modified incur less overhead because those blocks are tracked in the less costly coarse-grained region entries.
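
    A hedged C++ sketch of the two-granularity directory described above: shared, unmodified data is tracked per region with two bit vectors and no owner, while modified or unshared blocks get a precise per-block entry naming an owner. The region size, core count, and promote-on-write rule are assumptions made for illustration.

        // Two-granularity coherence directory: coarse region entries for shared
        // data, fine per-block entries for modified/unshared data.
        #include <bitset>
        #include <cstddef>
        #include <cstdint>
        #include <unordered_map>

        constexpr std::size_t kCores = 16;             // assumed number of sharers tracked
        constexpr std::size_t kBlocksPerRegion = 64;   // assumed region granularity

        struct CoarseRegionEntry {                     // shared data: cheap but imprecise
            std::bitset<kCores> sharers;               // which cores may cache blocks of this region
            std::bitset<kBlocksPerRegion> present;     // which blocks are likely present somewhere (no owner recorded)
        };

        struct FineBlockEntry {                        // modified/unshared data: precise
            std::uint16_t owner;                       // exact owning core, enabling targeted invalidation
        };

        class AdaptiveDirectory {
        public:
            void record_read(std::uint64_t block, std::uint16_t core) {
                // Reads of shared data only update the coarse region entry.
                CoarseRegionEntry& r = regions_[block / kBlocksPerRegion];
                r.sharers.set(core);
                r.present.set(block % kBlocksPerRegion);
            }

            void record_write(std::uint64_t block, std::uint16_t core) {
                // A write promotes the block to a fine-grained entry with an exact
                // owner (assumed rule), so a later invalidation can target one core
                // instead of every sharer of the region.
                blocks_[block] = FineBlockEntry{core};
            }

            // Cores that must be invalidated for this block: the exact owner if the
            // block is fine-tracked, otherwise every core in the region's sharer vector.
            std::bitset<kCores> invalidation_targets(std::uint64_t block) const {
                if (auto it = blocks_.find(block); it != blocks_.end()) {
                    std::bitset<kCores> t;
                    t.set(it->second.owner);
                    return t;
                }
                if (auto it = regions_.find(block / kBlocksPerRegion); it != regions_.end())
                    return it->second.sharers;
                return {};
            }

        private:
            std::unordered_map<std::uint64_t, CoarseRegionEntry> regions_;
            std::unordered_map<std::uint64_t, FineBlockEntry> blocks_;
        };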

    Apparatus, system, and method to determine a demarcation voltage to use to read a non-volatile memory

    Publication No.: US10452312B2

    Publication Date: 2019-10-22

    Application No.: US15396204

    Filing Date: 2016-12-30

    Abstract: Provided are an apparatus, system, and method to determine whether to use a low or high read voltage. First level indications of write addresses, for locations in the non-volatile memory to which write requests have been directed, are included in a first level data structure. For a write address having a first level indication in the first level data structure, that first level indication is removed from the first level data structure and a second level indication for the write address is added to a second level data structure, freeing space in the first level data structure to indicate a further write address. A first voltage level is used to read data from read addresses mapping to one of the first and second level indications in the first and second level data structures, respectively. A second voltage level is used to read data from read addresses that do not map to one of the first and second level indications in the first and second level data structures, respectively.
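
    A small C++ sketch of the two-level write-address tracking described above. The abstract does not say what form the second level data structure takes; modeling it as a set of coarser address ranges, and demoting first level entries in FIFO order, are assumptions, as are the structure sizes. The enum keeps the abstract's neutral "first"/"second" voltage labels.

        // Two-level tracking of recently written addresses, used to pick which of
        // two read (demarcation) voltages to apply.
        #include <cstddef>
        #include <cstdint>
        #include <deque>
        #include <unordered_set>

        constexpr std::size_t kFirstLevelCapacity = 8;          // assumed size of the first level data structure
        constexpr std::uint64_t kSecondLevelGranularity = 256;  // assumed coarsening for demoted entries

        enum class ReadVoltage { First, Second };               // the abstract's "first" and "second" voltage levels

        class DemarcationTracker {
        public:
            void record_write(std::uint64_t addr) {
                if (first_level_.count(addr)) return;           // already tracked exactly
                if (first_level_.size() == kFirstLevelCapacity) {
                    // Free space in the first level by demoting its oldest write
                    // address into a second level indication.
                    std::uint64_t old = first_order_.front();
                    first_order_.pop_front();
                    first_level_.erase(old);
                    second_level_.insert(old / kSecondLevelGranularity);
                }
                first_level_.insert(addr);
                first_order_.push_back(addr);
            }

            // Addresses with a first or second level indication are read with the
            // first voltage level; all other addresses use the second voltage level.
            ReadVoltage voltage_for_read(std::uint64_t addr) const {
                if (first_level_.count(addr) ||
                    second_level_.count(addr / kSecondLevelGranularity))
                    return ReadVoltage::First;
                return ReadVoltage::Second;
            }

        private:
            std::unordered_set<std::uint64_t> first_level_;     // exact write addresses
            std::deque<std::uint64_t> first_order_;             // FIFO order for demotion (assumed policy)
            std::unordered_set<std::uint64_t> second_level_;    // coarser indications of demoted addresses
        };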
