Memory scheduling for RAM caches based on tag caching
    1.
    Granted Patent (In force)

    Publication Number: US09026731B2

    Publication Date: 2015-05-05

    Application Number: US13725024

    Filing Date: 2012-12-21

    Abstract: A system, method and computer program product to store tag blocks in a tag buffer in order to provide early row-buffer miss detection, early page closing, and reductions in tag block transfers. A system comprises a tag buffer, a request buffer, and a memory controller. The request buffer stores a memory request having an associated tag. The memory controller compares the associated tag to a plurality of tags stored in the tag buffer and issues the memory request stored in the request buffer to either a memory cache or a main memory based on the comparison.
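
    The routing decision described in the abstract can be pictured with a short C sketch, assuming a small fully-searched tag buffer: the controller checks a request's tag against the buffered tags and issues the request to the RAM cache only on a match, otherwise directly to main memory. The buffer size, data structures, and names below are illustrative assumptions, not the patented design.

        /* Sketch of tag-buffer-based request routing (illustrative only). */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define TAG_BUFFER_ENTRIES 8

        typedef struct {
            uint64_t tags[TAG_BUFFER_ENTRIES];
            bool     valid[TAG_BUFFER_ENTRIES];
        } tag_buffer_t;

        typedef struct {
            uint64_t address;
            uint64_t tag;    /* tag derived from the address */
        } mem_request_t;

        /* Returns true if the request's tag is present in the tag buffer. */
        static bool tag_buffer_hit(const tag_buffer_t *tb, uint64_t tag) {
            for (int i = 0; i < TAG_BUFFER_ENTRIES; i++) {
                if (tb->valid[i] && tb->tags[i] == tag)
                    return true;
            }
            return false;
        }

        /* Issue to the RAM cache on a tag-buffer hit; otherwise go straight to
           main memory, avoiding a wasted lookup in the cache. */
        static void issue_request(const tag_buffer_t *tb, const mem_request_t *req) {
            if (tag_buffer_hit(tb, req->tag))
                printf("0x%llx -> RAM cache\n", (unsigned long long)req->address);
            else
                printf("0x%llx -> main memory\n", (unsigned long long)req->address);
        }

        int main(void) {
            tag_buffer_t tb = { .tags = { 0x12 }, .valid = { true } };
            mem_request_t hit  = { .address = 0x12000, .tag = 0x12 };
            mem_request_t miss = { .address = 0x34000, .tag = 0x34 };
            issue_request(&tb, &hit);
            issue_request(&tb, &miss);
            return 0;
        }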

    Dynamically Configuring Regions of a Main Memory in a Write-Back Mode or a Write-Through Mode
    2.
    Patent Application (In force)

    Publication Number: US20140143505A1

    Publication Date: 2014-05-22

    Application Number: US13736063

    Filing Date: 2013-01-07

    CPC classification number: G06F12/0802 G06F12/0804 G06F12/0862 G06F12/0888

    Abstract: The described embodiments include a main memory and a cache memory (or “cache”) with a cache controller that includes a mode-setting mechanism. In some embodiments, the mode-setting mechanism is configured to dynamically determine an access pattern for the main memory. Based on the determined access pattern, the mode-setting mechanism configures at least one region of the main memory in a write-back mode and configures other regions of the main memory in a write-through mode. In these embodiments, when performing a write operation in the cache memory, the cache controller determines whether a region in the main memory where the cache block is from is configured in the write-back mode or the write-through mode and then performs a corresponding write operation in the cache memory.
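
    The per-region mode decision can be sketched in C: a mode table, filled in by a hypothetical policy that watches write traffic, is consulted on every cache write, and the write either only dirties the cached block (write-back region) or is also forwarded to main memory (write-through region). The region size, table shape, and policy threshold are assumptions, not the described embodiment.

        /* Sketch of per-region write-back / write-through handling (illustrative). */
        #include <stdint.h>
        #include <stdio.h>

        #define REGION_SHIFT 20            /* assume 1 MiB regions */
        #define NUM_REGIONS  16

        typedef enum { MODE_WRITE_BACK, MODE_WRITE_THROUGH } region_mode_t;

        static region_mode_t region_mode[NUM_REGIONS];

        /* Hypothetical policy: a write-heavy region is set to write-back so
           repeated writes coalesce in the cache; other regions stay write-through. */
        static void configure_region(unsigned region, unsigned writes_observed) {
            region_mode[region] = (writes_observed > 100) ? MODE_WRITE_BACK
                                                          : MODE_WRITE_THROUGH;
        }

        static void cache_write(uint64_t address) {
            unsigned region = (unsigned)(address >> REGION_SHIFT) % NUM_REGIONS;
            if (region_mode[region] == MODE_WRITE_BACK)
                printf("0x%llx: update cache block, mark dirty\n",
                       (unsigned long long)address);
            else
                printf("0x%llx: update cache block and write main memory\n",
                       (unsigned long long)address);
        }

        int main(void) {
            configure_region(0, 500);   /* write-heavy -> write-back */
            configure_region(1, 3);     /* mostly reads -> write-through */
            cache_write(0x000123);      /* falls in region 0 */
            cache_write(0x100456);      /* falls in region 1 */
            return 0;
        }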

    Dynamically configuring regions of a main memory in a write-back mode or a write-through mode
    3.
    Granted Patent (In force)

    Publication Number: US09552294B2

    Publication Date: 2017-01-24

    Application Number: US13736063

    Filing Date: 2013-01-07

    CPC classification number: G06F12/0802 G06F12/0804 G06F12/0862 G06F12/0888

    Abstract: The described embodiments include a main memory and a cache memory (or “cache”) with a cache controller that includes a mode-setting mechanism. In some embodiments, the mode-setting mechanism is configured to dynamically determine an access pattern for the main memory. Based on the determined access pattern, the mode-setting mechanism configures at least one region of the main memory in a write-back mode and configures other regions of the main memory in a write-through mode. In these embodiments, when performing a write operation in the cache memory, the cache controller determines whether a region in the main memory where the cache block is from is configured in the write-back mode or the write-through mode and then performs a corresponding write operation in the cache memory.

    Predicting outcomes for memory requests in a cache memory
    4.
    Granted Patent (In force)

    Publication Number: US09235514B2

    Publication Date: 2016-01-12

    Application Number: US13736254

    Filing Date: 2013-01-08

    CPC classification number: G06F12/0802 G06F12/0804 G06F12/0862 G06F12/0888

    Abstract: The described embodiments include a cache controller with a prediction mechanism in a cache. In the described embodiments, the prediction mechanism is configured to perform a lookup in each table in a hierarchy of lookup tables in parallel to determine if a memory request is predicted to be a hit in the cache, each table in the hierarchy comprising predictions whether memory requests to corresponding regions of a main memory will hit the cache, the corresponding regions of the main memory being smaller for tables lower in the hierarchy.
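
    One way to picture the parallel hierarchical lookup is the C sketch below: every level of the hierarchy is probed for the request's address (in hardware the probes would proceed in parallel), and the lowest level holding a valid entry supplies the hit/miss prediction. Table sizes, region sizes, and the "most specific entry wins" selection are assumptions for illustration only.

        /* Sketch of a hierarchy of hit-prediction tables (illustrative). */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define LEVELS  3
        #define ENTRIES 64

        /* Region size per level: 16 MiB, 1 MiB, 64 KiB (lower level = smaller region). */
        static const unsigned region_shift[LEVELS] = { 24, 20, 16 };

        typedef struct {
            bool valid[ENTRIES];
            bool predict_hit[ENTRIES];   /* prediction recorded for this region */
        } predict_table_t;

        static predict_table_t tables[LEVELS];

        /* Probe every level; the lowest (most specific) valid entry wins. */
        static bool predict_cache_hit(uint64_t address) {
            bool prediction = true;      /* default when no level has an entry */
            for (int lvl = 0; lvl < LEVELS; lvl++) {
                unsigned idx = (unsigned)(address >> region_shift[lvl]) % ENTRIES;
                if (tables[lvl].valid[idx])
                    prediction = tables[lvl].predict_hit[idx];
            }
            return prediction;
        }

        int main(void) {
            /* Coarse level predicts misses for one 16 MiB region... */
            tables[0].valid[1] = true;
            tables[0].predict_hit[1] = false;
            /* ...but a finer level overrides it for one hot 64 KiB region. */
            unsigned fine = (unsigned)(0x1040000ULL >> 16) % ENTRIES;
            tables[2].valid[fine] = true;
            tables[2].predict_hit[fine] = true;

            printf("0x1000000 predicted %s\n",
                   predict_cache_hit(0x1000000) ? "hit" : "miss");
            printf("0x1040000 predicted %s\n",
                   predict_cache_hit(0x1040000) ? "hit" : "miss");
            return 0;
        }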

    DIRTY CACHELINE DUPLICATION
    5.
    Patent Application

    Publication Number: US20140173379A1

    Publication Date: 2014-06-19

    Application Number: US13720536

    Filing Date: 2012-12-19

    CPC classification number: G06F11/1064 G06F12/0893

    Abstract: A method of managing memory includes installing a first cacheline at a first location in a cache memory and receiving a write request. In response to the write request, the first cacheline is modified in accordance with the write request and marked as dirty. Also in response to the write request, a second cacheline is installed that duplicates the first cacheline, as modified in accordance with the write request, at a second location in the cache memory.
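
    The duplication step can be shown with a small C sketch: on a write the cache modifies the target line, marks it dirty, and installs a copy of the modified line at a second location, so a dirty copy survives if one location is later found corrupted. The cache geometry and the choice of the second location (a "mirror" slot) are assumptions, not the claimed placement.

        /* Sketch of dirty-cacheline duplication on a write (illustrative). */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define NUM_LINES 8
        #define LINE_SIZE 64

        typedef struct {
            bool     valid;
            bool     dirty;
            uint64_t tag;
            uint8_t  data[LINE_SIZE];
        } cacheline_t;

        static cacheline_t cache[NUM_LINES];

        /* Hypothetical choice of the duplicate's location: the "mirror" slot. */
        static unsigned second_slot(unsigned first) {
            return (first + NUM_LINES / 2) % NUM_LINES;
        }

        static void handle_write(unsigned slot, uint64_t tag,
                                 unsigned offset, uint8_t value) {
            cacheline_t *line = &cache[slot];

            /* Modify the first cacheline per the write request and mark it dirty. */
            line->valid = true;
            line->tag = tag;
            line->data[offset] = value;
            line->dirty = true;

            /* Install a duplicate of the modified line at a second location. */
            cache[second_slot(slot)] = *line;
        }

        int main(void) {
            handle_write(2, 0xABCD, 0, 0x5A);
            printf("slot 2 dirty=%d, duplicate slot %u dirty=%d\n",
                   cache[2].dirty, second_slot(2), cache[second_slot(2)].dirty);
            return 0;
        }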

    Dirty cacheline duplication
    6.
    Granted Patent (In force)

    Publication Number: US09229803B2

    Publication Date: 2016-01-05

    Application Number: US13720536

    Filing Date: 2012-12-19

    CPC classification number: G06F11/1064 G06F12/0893

    Abstract: A method of managing memory includes installing a first cacheline at a first location in a cache memory and receiving a write request. In response to the write request, the first cacheline is modified in accordance with the write request and marked as dirty. Also in response to the write request, a second cacheline is installed that duplicates the first cacheline, as modified in accordance with the write request, at a second location in the cache memory.

    Memory Scheduling for RAM Caches Based on Tag Caching
    7.
    Patent Application (In force)

    Publication Number: US20140181384A1

    Publication Date: 2014-06-26

    Application Number: US13725024

    Filing Date: 2012-12-21

    Abstract: A system, method and computer program product to store tag blocks in a tag buffer in order to provide early row-buffer miss detection, early page closing, and reductions in tag block transfers. A system comprises a tag buffer, a request buffer, and a memory controller. The request buffer stores a memory request having an associated tag. The memory controller compares the associated tag to a plurality of tags stored in the tag buffer and issues the memory request stored in the request buffer to either a memory cache or a main memory based on the comparison.

    Predicting Outcomes for Memory Requests in a Cache Memory
    8.
    Patent Application (In force)

    Publication Number: US20140143502A1

    Publication Date: 2014-05-22

    Application Number: US13736254

    Filing Date: 2013-01-08

    CPC classification number: G06F12/0802 G06F12/0804 G06F12/0862 G06F12/0888

    Abstract: The described embodiments include a cache controller with a prediction mechanism in a cache. In the described embodiments, the prediction mechanism is configured to perform a lookup in each table in a hierarchy of lookup tables in parallel to determine if a memory request is predicted to be a hit in the cache, each table in the hierarchy comprising predictions whether memory requests to corresponding regions of a main memory will hit the cache, the corresponding regions of the main memory being smaller for tables lower in the hierarchy.
