Variable distance bypass between tag array and data array pipelines in a cache
    22.
    Invention grant (In force)

    Publication number: US09529720B2

    Publication date: 2016-12-27

    Application number: US13912809

    Filing date: 2013-06-07

    CPC classification number: G06F12/0855 G06F12/0844 G06F12/0846

    Abstract: The present application describes embodiments of techniques for picking a data array lookup request for execution in a data array pipeline a variable number of cycles behind a corresponding tag array lookup request that is concurrently executing in a tag array pipeline. Some embodiments of a method for picking the data array lookup request include picking the data array lookup request for execution in a data array pipeline of a cache concurrently with execution of a tag array lookup request in a tag array pipeline of the cache. The data array lookup request is picked for execution in response to resources of the data array pipeline becoming available after picking the tag array lookup request for execution. Some embodiments of the method may be implemented in a cache.

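    The mechanism lends itself to a small simulation. Below is a minimal Python sketch of such a picker; the class name, slot count, and pick policy are illustrative assumptions, not the patented design. A tag lookup is recorded when it issues, and the paired data-array lookup is picked as soon as a data-pipeline slot frees up, so the bypass distance varies per request.

        from collections import deque

        class VariableDistancePicker:
            def __init__(self, data_pipe_slots=2):
                self.free_slots = data_pipe_slots   # available data-pipeline resources
                self.pending = deque()              # tag lookups awaiting a data-array pick

            def pick_tag_lookup(self, request_id, cycle):
                # The tag-array lookup issues immediately; remember when it was picked.
                self.pending.append((request_id, cycle))

            def tick(self, cycle):
                # Each cycle, pick data-array lookups while pipeline slots are free;
                # the distance behind the tag lookup (cycle - tag_cycle) varies.
                picked = []
                while self.pending and self.free_slots > 0:
                    request_id, tag_cycle = self.pending.popleft()
                    self.free_slots -= 1
                    picked.append((request_id, cycle - tag_cycle))
                return picked

            def release_slot(self):
                # Called when a data-array lookup drains from the pipeline.
                self.free_slots += 1

        picker = VariableDistancePicker(data_pipe_slots=1)
        picker.pick_tag_lookup("A", cycle=0)
        picker.pick_tag_lookup("B", cycle=1)
        print(picker.tick(cycle=2))   # -> [('A', 2)]; "B" waits for a slot to free up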

    Cache access arbitration for prefetch requests
    23.
    Invention grant (In force)

    Publication number: US09223705B2

    Publication date: 2015-12-29

    Application number: US13854541

    Filing date: 2013-04-01

    Abstract: A processor employs a prefetch prediction module that predicts, for each prefetch request, whether the request is likely to be satisfied from the cache (a "hit"). The arbitration priority of prefetch requests that are predicted to hit the cache is reduced relative to demand requests or other prefetch requests that are predicted to miss in the cache. Accordingly, an arbiter for the cache is less likely to select prefetch requests that hit the cache, thereby improving processor throughput.

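    A minimal Python sketch of the arbitration idea, under assumed priority values (the predictor itself is stubbed out as a boolean flag): demand requests arbitrate first, prefetches predicted to miss next, and prefetches predicted to hit last.

        import heapq

        DEMAND, PREFETCH = 0, 1

        def priority(req_type, predicted_hit):
            # Lower value = higher arbitration priority (heapq pops smallest first).
            if req_type == DEMAND:
                return 0
            return 2 if predicted_hit else 1   # hit-predicted prefetches arbitrate last

        class CacheArbiter:
            def __init__(self):
                self.queue = []
                self.seq = 0                   # tiebreaker preserves arrival order

            def submit(self, req_id, req_type, predicted_hit=False):
                heapq.heappush(self.queue,
                               (priority(req_type, predicted_hit), self.seq, req_id))
                self.seq += 1

            def pick(self):
                return heapq.heappop(self.queue)[2] if self.queue else None

        arb = CacheArbiter()
        arb.submit("prefetch-hit", PREFETCH, predicted_hit=True)
        arb.submit("demand-load", DEMAND)
        print(arb.pick())                      # -> 'demand-load', despite arriving second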

    Detecting and correcting hard errors in a memory array
    24.
    Invention grant (In force)

    Publication number: US09189326B2

    Publication date: 2015-11-17

    Application number: US14048830

    Filing date: 2013-10-08

    Abstract: Hard errors in a memory array can be detected and corrected in real time using reusable entries in an error status buffer. Data may be rewritten to a portion of a memory array and to a register in response to a first error in data read from that portion of the memory array. The rewritten data may then be written from the register to an entry of the error status buffer in response to the rewritten data read from the register differing from the rewritten data read from the portion of the memory array.

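    The detect-and-correct flow can be sketched in Python as follows; FaultyMemory, the stuck-bit model, and handle_error are illustrative stand-ins for the hardware, not the patented circuit.

        class FaultyMemory:
            def __init__(self, size, stuck_bits=None):
                self.cells = [0] * size
                self.stuck = stuck_bits or {}      # addr -> mask of bits stuck at 1

            def write(self, addr, value):
                self.cells[addr] = value | self.stuck.get(addr, 0)

            def read(self, addr):
                return self.cells[addr]

        def handle_error(mem, addr, corrected, error_status_buffer):
            mem.write(addr, corrected)             # rewrite the failing location...
            register = corrected                   # ...and a holding register
            if mem.read(addr) != register:         # persistent mismatch = hard error
                # A reusable error-status-buffer entry keeps the known-good copy.
                error_status_buffer.append((addr, register))
                return "hard"
            return "soft"                          # rewrite stuck; error was transient

        # Usage: bit 0 of address 3 is stuck at 1, so a rewrite of 0 still reads 1.
        mem = FaultyMemory(8, stuck_bits={3: 0b1})
        esb = []
        print(handle_error(mem, 3, 0, esb))        # -> 'hard'; esb now holds (3, 0)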

    Method and apparatus for cache control
    25.
    Invention grant (In force)

    Publication number: US08832485B2

    Publication date: 2014-09-09

    Application number: US13854616

    Filing date: 2013-04-01

    Abstract: A method and apparatus for dynamically controlling a cache size is disclosed. In one embodiment, a method includes changing an operating point of a processor from a first operating point to a second operating point, and selectively removing power from one or more ways of a cache memory responsive to changing the operating point. The method further includes processing one or more instructions in the processor subsequent to removing power from the one or more ways of the cache memory, wherein said processing includes accessing one or more ways of the cache memory from which power was not removed.

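    A minimal Python sketch of the way-gating idea; the proportional ways-per-operating-point policy here is an assumption for illustration (the claims cover the mechanism, not any particular policy).

        class WayGatedCache:
            def __init__(self, num_ways=8):
                self.num_ways = num_ways
                self.powered_ways = num_ways       # all ways on at the top operating point

            def set_operating_point(self, point, max_point=3):
                # Selectively remove power from ways when the operating point drops;
                # the proportional scaling below is an assumed example policy.
                self.powered_ways = max(1, self.num_ways * (point + 1) // (max_point + 1))

            def lookup_ways(self):
                # Subsequent instructions access only the still-powered ways.
                return range(self.powered_ways)

        cache = WayGatedCache()
        cache.set_operating_point(1)               # change to a lower operating point
        print(list(cache.lookup_ways()))           # -> [0, 1, 2, 3]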

    MANAGEMENT OF CACHE SIZE
    26.
    Invention application (In force)

    Publication number: US20140181410A1

    Publication date: 2014-06-26

    Application number: US13723093

    Filing date: 2012-12-20

    Abstract: In response to a processor core exiting a low-power state, a cache is set to a minimum size so that fewer than all of the cache's entries are available to store data, thus reducing the cache's power consumption. Over time, the size of the cache can be increased to account for heightened processor activity, thus ensuring that processing efficiency is not significantly impacted by a reduced cache size. In some embodiments, the cache size is increased based on a measured processor performance metric, such as an eviction rate of the cache. In some embodiments, the cache size is increased at regular intervals until a maximum size is reached.

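    A minimal Python sketch of the resizing policy, using the eviction-rate metric the abstract mentions; the thresholds, way counts, and doubling step are illustrative assumptions.

        class ResizableCache:
            MIN_WAYS, MAX_WAYS = 2, 16

            def __init__(self):
                self.active_ways = self.MAX_WAYS
                self.evictions = 0
                self.accesses = 0

            def on_low_power_exit(self):
                # Fewer than all entries are usable, cutting the cache's power draw.
                self.active_ways = self.MIN_WAYS

            def on_interval_end(self, eviction_rate_threshold=0.10):
                # At regular intervals, grow the cache if the eviction rate suggests
                # the reduced size is hurting, until the maximum size is reached.
                rate = self.evictions / max(1, self.accesses)
                if rate > eviction_rate_threshold:
                    self.active_ways = min(self.MAX_WAYS, self.active_ways * 2)
                self.evictions = self.accesses = 0

        cache = ResizableCache()
        cache.on_low_power_exit()                  # active_ways == 2
        cache.evictions, cache.accesses = 30, 100  # a busy measurement interval
        cache.on_interval_end()                    # rate 0.30 > 0.10, so grow
        print(cache.active_ways)                   # -> 4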

    PREFETCHING TO A CACHE BASED ON BUFFER FULLNESS
    27.
    Invention application (In force)

    Publication number: US20140129772A1

    Publication date: 2014-05-08

    Application number: US13669502

    Filing date: 2012-11-06

    CPC classification number: G06F12/0862 G06F12/0897

    Abstract: A processor transfers prefetch requests from their targeted cache to another cache in a memory hierarchy based on a fullness of a miss address buffer (MAB) or based on confidence levels of the prefetch requests. Each cache in the memory hierarchy is assigned a number of slots at the MAB. In response to determining that the fullness of the slots assigned to a cache is above a threshold when a prefetch request to the cache is received, the processor transfers the prefetch request to the next lower level cache in the memory hierarchy. In response, the data targeted by the access request is prefetched to the next lower level cache in the memory hierarchy, and is therefore available for subsequent provision to the cache. In addition, the processor can transfer a prefetch request to lower level caches based on a confidence level of a prefetch request.

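    The routing decision can be sketched in Python as follows; the slot quotas, fullness threshold, and the confidence-relaxation step are illustrative assumptions, not figures from the patent.

        MAB_SLOTS = {"L1": 4, "L2": 8, "L3": 16}   # assumed per-level slot quotas
        LOWER = {"L1": "L2", "L2": "L3", "L3": None}
        FULLNESS_THRESHOLD = 0.75

        def route_prefetch(target, in_use, confidence, min_confidence=0.5):
            # Demote the prefetch while its target level's MAB quota is too full
            # or its confidence is below that level's bar.
            while target is not None:
                fullness = in_use[target] / MAB_SLOTS[target]
                if fullness <= FULLNESS_THRESHOLD and confidence >= min_confidence:
                    return target                  # data is prefetched into this level
                target = LOWER[target]             # next lower cache in the hierarchy
                min_confidence /= 2                # assumed: lower levels accept less
            return None                            # nowhere to put it; drop the prefetch

        # L1's four slots are all busy, so the prefetch is transferred to L2.
        print(route_prefetch("L1", {"L1": 4, "L2": 2, "L3": 0}, confidence=0.9))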

    Accessing a cache based on an address translation buffer result

    Publication number: US12287739B2

    Publication date: 2025-04-29

    Application number: US18064155

    Filing date: 2022-12-09

    Abstract: Address translation is performed to translate a virtual address targeted by a memory request (e.g., a load or memory request for data or an instruction) to a physical address. This translation is performed using an address translation buffer, e.g., a translation lookaside buffer (TLB). One or more actions are taken to reduce data access latencies for memory requests in the event of a TLB miss where the virtual address to physical address translation is not in the TLB. Examples of actions that are performed in various implementations in response to a TLB miss include bypassing level 1 (L1) and level 2 (L2) caches in the memory system, and speculatively sending the memory request to the L2 cache while checking whether the memory request is satisfied by the L1 cache.
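    A minimal Python sketch of the speculative-L2 action the abstract names (the L1/L2 bypass action is analogous). The sequential code only models the ordering; in hardware the page walk, the L1 check, and the speculative L2 lookup overlap.

        class Cache:
            def __init__(self, lines=None):
                self.lines = lines or {}

            def lookup(self, paddr):
                return self.lines.get(paddr)       # None on a miss

        def issue_memory_request(vaddr, tlb, l1, l2, page_walk):
            paddr = tlb.get(vaddr)
            if paddr is None:                      # TLB miss
                paddr = page_walk(vaddr)           # resolve the translation
                tlb[vaddr] = paddr                 # fill the TLB for next time
                speculative = l2.lookup(paddr)     # sent without waiting on the L1 result
                l1_hit = l1.lookup(paddr)          # checked concurrently in hardware
                return l1_hit if l1_hit is not None else speculative
            return l1.lookup(paddr) or l2.lookup(paddr)

        tlb, l1, l2 = {}, Cache(), Cache({0x2000: "cache line"})
        print(issue_memory_request(0x1234, tlb, l1, l2, page_walk=lambda v: 0x2000))
        # -> 'cache line', returned by the speculative L2 lookup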

    Speculative DRAM request enabling and disabling

    Publication number: US12189953B2

    Publication date: 2025-01-07

    Application number: US17956417

    Filing date: 2022-09-29

    Abstract: Methods, devices, and systems for retrieving information based on cache miss prediction. It is predicted, based on a history of cache misses at a private cache, that a cache lookup for the information will miss a shared victim cache. A speculative memory request is enabled based on the prediction that the cache lookup for the information will miss the shared victim cache. The information is fetched based on the enabled speculative memory request.
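    A minimal Python sketch of one way such a predictor might work; a saturating counter trained on victim-cache outcomes is an assumed realization, not necessarily the patented one.

        class SpeculativeDramPredictor:
            def __init__(self, threshold=8, ceiling=15):
                self.counter = 0                   # saturating miss counter
                self.threshold = threshold
                self.ceiling = ceiling

            def record_victim_lookup(self, victim_hit):
                # Train on each private-cache miss's outcome at the shared victim cache.
                if victim_hit:
                    self.counter = max(0, self.counter - 1)
                else:
                    self.counter = min(self.ceiling, self.counter + 1)

            def speculative_request_enabled(self):
                # Enable speculative DRAM fetches only when victim-cache misses dominate.
                return self.counter >= self.threshold

        pred = SpeculativeDramPredictor(threshold=2)
        pred.record_victim_lookup(victim_hit=False)
        pred.record_victim_lookup(victim_hit=False)
        print(pred.speculative_request_enabled())  # -> True: fetch DRAM in parallel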
