-
Publication number: US09583182B1
Publication date: 2017-02-28
Application number: US15077424
Filing date: 2016-03-22
Applicant: Intel Corporation
Inventor: Christopher B. Wilkerson , Alaa R. Alameldeen , Zhe Wang , Zeshan A. Chishti
CPC classification number: G06F12/0804 , G06F12/0292 , G06F12/0868 , G06F12/0897 , G06F12/1009 , G06F12/1027 , G06F12/12 , G06F2212/1021 , G06F2212/502 , G06F2212/608 , G06F2212/651 , G11C8/00 , G11C11/56 , G11C16/08
Abstract: A multi-level memory management circuit can remap data between near and far memory. In one embodiment, a register array stores near memory addresses and far memory addresses mapped to the near memory addresses. The number of entries in the register array is less than the number of pages in near memory. Remapping logic determines that a far memory address of the requested data is absent from the register array and selects an available near memory address from the register array. Remapping logic also initiates writing of the requested data at the far memory address to the selected near memory address. Remapping logic further writes the far memory address to an entry of the register array corresponding to the selected near memory address.
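The remapping flow described in this abstract can be sketched roughly as follows. This is an illustrative Python model only; the register-array size, address values, and method names are assumptions, not taken from the patent:

```python
class RemapTable:
    """Register array mapping a small set of near-memory pages to far-memory addresses.

    There are fewer entries than near-memory pages; a far address of None
    marks an available near-memory address.
    """
    def __init__(self, near_addresses):
        self.entries = {near: None for near in near_addresses}

    def access(self, far_addr):
        # Hit: the far address is already remapped into near memory.
        for near, far in self.entries.items():
            if far == far_addr:
                return near, "hit"
        # Miss: select an available near address and record the mapping
        # (the data at far_addr would be copied to that near address).
        for near, far in self.entries.items():
            if far is None:
                self.entries[near] = far_addr
                return near, "remapped"
        return None, "no-free-entry"
```

A second access to the same far address then resolves to the recorded near address without another copy.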
-
Publication number: US20150089126A1
Publication date: 2015-03-26
Application number: US14036673
Filing date: 2013-09-25
Applicant: Intel Corporation
Inventor: Sreenivas Subramoney , Jayesh Gaur , Alaa R. Alameldeen
IPC: G06F12/08
CPC classification number: G06F12/126 , G06F12/0895
Abstract: In an embodiment, a processor includes a cache data array including a plurality of physical ways, each physical way to store a baseline way and a victim way; a cache tag array including a plurality of tag groups, each tag group associated with a particular physical way and including a first tag associated with the baseline way stored in the particular physical way, and a second tag associated with the victim way stored in the particular physical way; and cache control logic to: select a first baseline way based on a replacement policy, select a first victim way based on an available capacity of a first physical way including the first victim way, and move a first data element from the first baseline way to the first victim way. Other embodiments are described and claimed.
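The baseline/victim-way arrangement above can be illustrated with a minimal Python sketch; the class names and the two-slot-per-physical-way layout are assumptions for illustration, not the patent's implementation:

```python
class PhysicalWay:
    """One physical way holding a baseline entry and an optional victim entry."""
    def __init__(self):
        self.baseline = None  # (tag, data) selected by the replacement policy
        self.victim = None    # (tag, data) kept past eviction if capacity allows

class VictimCacheSet:
    def __init__(self, n_ways):
        self.ways = [PhysicalWay() for _ in range(n_ways)]

    def evict_to_victim(self, baseline_idx):
        # Move the evicted baseline entry into any physical way whose
        # victim slot has available capacity.
        element = self.ways[baseline_idx].baseline
        for way in self.ways:
            if way.victim is None:
                way.victim = element
                self.ways[baseline_idx].baseline = None
                return True
        return False  # no victim capacity anywhere in the set
```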
-
Publication number: US11526448B2
Publication date: 2022-12-13
Application number: US16586251
Filing date: 2019-09-27
Applicant: Intel Corporation
Inventor: Zhe Wang , Alaa R. Alameldeen , Yi Zou , Gordon King
IPC: G06F12/0811 , G06F12/0873 , G06F12/02 , G06F13/16 , G06F12/0897
Abstract: An apparatus is described. The apparatus includes a memory controller to interface with a multi-level memory, where, an upper level of the multi-level memory is to act as a cache for a lower level of the multi-level memory. The memory controller has circuitry to determine: i) an original address of a slot in the upper level of memory from an address of a memory request in a direct mapped fashion; ii) a miss in the cache for the request because the slot is pinned with data from another address that competes with the address; iii) a partner slot of the slot in the cache in response to the miss; iv) whether there is a hit or miss in the partner slot in the cache for the request.
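The pinned-slot/partner-slot lookup sequence can be sketched as below. The partner-slot function (`slot ^ 1`) is an assumption chosen for illustration; the patent does not specify it:

```python
class TwoLevelCache:
    """Direct-mapped upper-level (near) memory acting as a cache for the lower level."""
    def __init__(self, n_slots):
        self.n = n_slots
        self.tags = [None] * n_slots    # far-memory address held in each slot
        self.pinned = [False] * n_slots # slot pinned with data from a competing address

    def lookup(self, addr):
        slot = addr % self.n  # i) original slot, determined direct-mapped
        if self.tags[slot] == addr:
            return slot, "hit"
        if self.pinned[slot]:
            # ii) miss because the slot is pinned by a competing address;
            # iii) derive a partner slot and iv) check it for a hit or miss.
            partner = slot ^ 1
            if self.tags[partner] == addr:
                return partner, "partner-hit"
            return partner, "partner-miss"
        return slot, "miss"
```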
-
Publication number: US11188467B2
Publication date: 2021-11-30
Application number: US15717939
Filing date: 2017-09-28
Applicant: Intel Corporation
Inventor: Israel Diamand , Alaa R. Alameldeen , Sreenivas Subramoney , Supratik Majumder , Srinivas Santosh Kumar Madugula , Jayesh Gaur , Zvika Greenfield , Anant V. Nori
IPC: G06F12/00 , G06F12/0846 , G06F12/0811 , G06F12/128 , G06F12/121 , G06F12/0886 , G06F12/08
Abstract: A method is described. The method includes receiving a read or write request for a cache line. The method includes directing the request to a set of logical super lines based on the cache line's system memory address. The method includes associating the request with a cache line of the set of logical super lines. The method includes, if the request is a write request: compressing the cache line to form a compressed cache line, breaking the cache line down into smaller data units and storing the smaller data units into a memory side cache. The method includes, if the request is a read request: reading smaller data units of the compressed cache line from the memory side cache and decompressing the cache line.
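The compress-then-split write path and the reassemble-then-decompress read path can be sketched as follows. The use of `zlib` and the 16-byte data-unit size are illustrative assumptions; the patent does not name a compression algorithm or unit size:

```python
import zlib

UNIT = 16  # assumed size of the smaller data units, in bytes

class MemorySideCache:
    def __init__(self):
        self.units = {}  # cache-line address -> list of stored data units

    def write(self, addr, line: bytes):
        # Compress the cache line, then break it into smaller data units.
        compressed = zlib.compress(line)
        self.units[addr] = [compressed[i:i + UNIT]
                            for i in range(0, len(compressed), UNIT)]

    def read(self, addr) -> bytes:
        # Read the smaller data units back and decompress the line.
        return zlib.decompress(b"".join(self.units[addr]))
```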
-
Publication number: US10860244B2
Publication date: 2020-12-08
Application number: US15854357
Filing date: 2017-12-26
Applicant: Intel Corporation
Inventor: Binh Pham , Christopher B. Wilkerson , Alaa R. Alameldeen , Zeshan A. Chishti , Zhe Wang
IPC: G06F3/06 , G06F12/0862 , G06F12/0871 , G06F12/1027 , G06F12/0897 , G06F12/1045 , G06F12/128 , G06F12/14 , G06F12/123
Abstract: An apparatus is described that includes a memory controller to couple to a multi-level memory characterized by a faster higher level and a slower lower level. The memory controller having early demotion logic circuitry to demote a page from the higher level to the lower level without system software having to instruct the memory controller to demote the page and before the system software promotes another page from the lower level to the higher level.
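A minimal model of the early-demotion idea: the controller demotes a cold page from the fast level on its own, rather than waiting for system software to request room for a promotion. The access-count heuristic and capacity parameter below are assumptions for illustration:

```python
class TieredMemory:
    def __init__(self, fast_capacity):
        self.fast = {}      # page -> access count (faster higher level)
        self.slow = set()   # pages resident in the slower lower level
        self.capacity = fast_capacity

    def touch(self, page):
        if page in self.fast:
            self.fast[page] += 1

    def early_demote(self):
        # Hardware proactively demotes the coldest page once the fast
        # level is full, without a system-software instruction and before
        # software promotes another page from the lower level.
        if len(self.fast) >= self.capacity:
            coldest = min(self.fast, key=self.fast.get)
            del self.fast[coldest]
            self.slow.add(coldest)
            return coldest
        return None
```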
-
Publication number: US10691602B2
Publication date: 2020-06-23
Application number: US16024666
Filing date: 2018-06-29
Applicant: Intel Corporation
Inventor: Gino Chacon , Alaa R. Alameldeen
IPC: G06F12/08 , G06F12/084 , G06F12/0815
Abstract: To reduce overhead for cache coherence for shared cache in multi-processor systems, adaptive granularity allows tracking shared data at a coarse granularity and unshared data at fine granularity. Processes for adaptive granularity select how large of an entry is required to track the coherence of a block based on its state. Shared blocks are tracked in coarse-grained region entries that include a sharer tracking bit vector and a bit vector that indicates which blocks are likely to be present in the system, but do not identify the owner of the block. Modified/unshared data is tracked in fine-grained entries that permit ownership tracking and exact location and invalidation of cache. Large caches where the majority of blocks are shared and not modified create less overhead by being tracked in the less costly coarse-grained region entries.
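The two entry kinds can be sketched as a small directory model: coarse region entries carry a sharer bit vector plus a presence bit vector (no owner), while fine-grained entries record the exact owner of modified/unshared blocks. The region size of 4 blocks is an assumed parameter:

```python
REGION_BLOCKS = 4  # assumed number of blocks per coarse-grained region

class Directory:
    def __init__(self):
        self.fine = {}    # block -> owning core (modified/unshared data)
        self.coarse = {}  # region -> (sharer bit vector, present-block bit vector)

    def record_read(self, block, core):
        # Shared data: track at coarse region granularity; the presence
        # bit says the block is likely in the system but names no owner.
        region, offset = divmod(block, REGION_BLOCKS)
        sharers, present = self.coarse.get(region, (0, 0))
        self.coarse[region] = (sharers | (1 << core), present | (1 << offset))
        self.fine.pop(block, None)

    def record_write(self, block, core):
        # Modified/unshared data: a fine-grained entry permits ownership
        # tracking and exact invalidation.
        self.fine[block] = core
```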
-
Publication number: US10452312B2
Publication date: 2019-10-22
Application number: US15396204
Filing date: 2016-12-30
Applicant: Intel Corporation
Inventor: Zhe Wang , Zeshan A. Chishti , Muthukumar P. Swaminathan , Alaa R. Alameldeen , Kunal A. Khochare , Jason A. Gayman
Abstract: Provided are an apparatus, system and method to determine whether to use a low or high read voltage. First level indications of write addresses, for locations in the non-volatile memory to which write requests have been directed, are included in a first level data structure. For a write address of the write addresses having a first level indication in the first level data structure, the first level indication of the write address is removed from the first level data structure and a second level indication for the write address is added to a second level data structure to free space in the first level data structure to indicate a further write address. A first voltage level is used to read data from read addresses mapping to one of the first and second level indications in the first and the second level data structures, respectively. A second voltage level is used to read data from read addresses that do not map to one of the first and second level indications in the first and second level data structures, respectively.
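The two-level spill and the voltage selection can be modeled as below. The first-level capacity, the FIFO spill order, and the use of exact sets are illustrative assumptions (a real design might use a coarser second-level structure):

```python
FIRST_LEVEL_CAPACITY = 4  # assumed size of the first-level data structure

class ReadVoltageSelector:
    def __init__(self):
        self.first = []      # first level indications of recent write addresses
        self.second = set()  # second level indications for spilled addresses

    def record_write(self, addr):
        if addr in self.first:
            return
        if len(self.first) >= FIRST_LEVEL_CAPACITY:
            # Move the oldest first-level indication down to the second
            # level, freeing space to indicate a further write address.
            self.second.add(self.first.pop(0))
        self.first.append(addr)

    def read_voltage(self, addr):
        # Addresses with a first- or second-level indication read at the
        # first voltage level; all others read at the second voltage level.
        return "V1" if (addr in self.first or addr in self.second) else "V2"
```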
-
Publication number: US10417135B2
Publication date: 2019-09-17
Application number: US15718071
Filing date: 2017-09-28
Applicant: Intel Corporation
Inventor: Zhe Wang , Zeshan A. Chishti , Alaa R. Alameldeen , Rajat Agarwal
IPC: G06F12/08 , G06F12/12 , G06F12/10 , G06F12/0877 , G06F12/0862 , G06F12/128 , G06F12/0888 , G06F12/1009 , G06F12/0817
Abstract: Systems, apparatuses and methods may provide for technology to maintain a prediction table that tracks missed page addresses with respect to a first memory. If an access request does not correspond to any valid page addresses in the prediction table, the access request may be sent to the first memory. If the access request corresponds to a valid page address in the prediction table, the access request may be sent to the first memory and a second memory in parallel, wherein the first memory is associated with a shorter access time than the second memory.
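The routing decision driven by the prediction table can be sketched in a few lines; the set-based table and method names are illustrative assumptions:

```python
class MissPredictor:
    def __init__(self):
        self.table = set()  # valid page addresses that missed the first memory

    def route(self, page):
        # Predicted miss: send the request to the first (faster) memory
        # and the second memory in parallel; otherwise first memory only.
        if page in self.table:
            return ("first", "second")
        return ("first",)

    def record_miss(self, page):
        self.table.add(page)
```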
-
Publication number: US10048868B2
Publication date: 2018-08-14
Application number: US15279647
Filing date: 2016-09-29
Applicant: Intel Corporation
Inventor: Alaa R. Alameldeen , Glenn J. Hinton , Blaise Fanning , James J. Greensky
IPC: G06F12/00 , G06F3/06 , G06F12/0873 , G06F12/12
Abstract: Systems, apparatuses and methods may provide for identifying a first block and a second block, wherein the first block includes a first plurality of cache lines, the second block includes a second plurality of cache lines, and the second block resides in a memory-side cache. Additionally, each cache line in the first plurality of cache lines may be compressed with a corresponding cache line in the second plurality of cache lines to obtain a compressed block that includes a third plurality of cache lines. In one example, the second block is replaced in the memory-side cache with the compressed block if the compressed block satisfies a size condition.
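The pairwise line compression and the size condition can be sketched as follows. The choice of `zlib` and a 64-byte line size are assumptions; the patent does not specify either:

```python
import zlib

LINE_SIZE = 64  # assumed cache-line size in bytes

def try_pair_compress(block_a, block_b):
    """Compress each line of block_a with its corresponding line of block_b.

    Returns the third plurality of (compressed) cache lines if every pair
    fits back into a single line slot (the size condition), else None.
    """
    compressed = [zlib.compress(a + b) for a, b in zip(block_a, block_b)]
    if all(len(c) <= LINE_SIZE for c in compressed):
        return compressed
    return None
```

When the size condition holds, the compressed block would replace the second block in the memory-side cache; otherwise the blocks stay uncompressed.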
-
Publication number: US20180088822A1
Publication date: 2018-03-29
Application number: US15279647
Filing date: 2016-09-29
Applicant: Intel Corporation
Inventor: Alaa R. Alameldeen , Glenn J. Hinton , Blaise Fanning , James J. Greensky
IPC: G06F3/06 , G06F12/0873 , G06F12/12
CPC classification number: G06F3/0608 , G06F3/064 , G06F3/0661 , G06F3/0673 , G06F12/0873 , G06F12/12 , G06F12/126 , G06F12/128 , G06F2212/1044 , G06F2212/281 , G06F2212/3042 , G06F2212/305 , G06F2212/401 , G06F2212/69 , G06F2212/70
Abstract: Systems, apparatuses and methods may provide for identifying a first block and a second block, wherein the first block includes a first plurality of cache lines, the second block includes a second plurality of cache lines, and the second block resides in a memory-side cache. Additionally, each cache line in the first plurality of cache lines may be compressed with a corresponding cache line in the second plurality of cache lines to obtain a compressed block that includes a third plurality of cache lines. In one example, the second block is replaced in the memory-side cache with the compressed block if the compressed block satisfies a size condition.
-