2. Hybrid Write-Through/Write-Back Cache Policy Managers, and Related Systems and Methods
    Invention application
    Status: In force

    Publication No.: US20130185511A1

    Publication Date: 2013-07-18

    Application No.: US13470643

    Filing Date: 2012-05-14

    IPC Class: G06F12/08

    Abstract: Embodiments disclosed in the detailed description include hybrid write-through/write-back cache policy managers, and related systems and methods. A cache write policy manager is configured to determine whether at least two caches among a plurality of parallel caches are active. If none of the one or more other caches is active, the cache write policy manager is configured to instruct an active cache among the parallel caches to apply a write-back cache policy. In this manner, the cache write policy manager may conserve power and/or increase performance of a singly active processor core. If any of the one or more other caches is active, the cache write policy manager is configured to instruct an active cache among the parallel caches to apply a write-through cache policy. In this manner, the cache write policy manager facilitates data coherency among the parallel caches when multiple processor cores are active.

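    A minimal sketch of the policy-selection rule described in this abstract, in C. The structure and function names below are illustrative assumptions, not taken from the patent: the manager counts the active caches among the parallel caches, allows write-back when at most one cache is active, and switches every active cache to write-through as soon as two or more are active so the shared lower-level memory stays coherent.

        #include <stdbool.h>
        #include <stddef.h>

        /* Hypothetical per-cache descriptor; field names are illustrative. */
        typedef enum { WRITE_THROUGH, WRITE_BACK } write_policy_t;

        typedef struct {
            bool           active;   /* cache (and its core) is powered and in use */
            write_policy_t policy;   /* write policy currently applied             */
        } cache_t;

        /* Select write-back when at most one cache is active (power/performance
         * win for a singly active core), otherwise write-through (coherency
         * among the parallel caches). */
        static void update_write_policies(cache_t caches[], size_t n)
        {
            size_t active_count = 0;
            for (size_t i = 0; i < n; i++)
                if (caches[i].active)
                    active_count++;

            write_policy_t chosen = (active_count <= 1) ? WRITE_BACK : WRITE_THROUGH;

            for (size_t i = 0; i < n; i++)
                if (caches[i].active)
                    caches[i].policy = chosen;
        }

    A real manager would presumably also have to clean dirty lines when moving an active cache from write-back to write-through after a second core wakes; that transition is outside this sketch.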

3. Utilizing Negative Feedback from Unexpected Miss Addresses in a Hardware Prefetcher
    Invention application
    Status: Pending (published)

    Publication No.: US20130185515A1

    Publication Date: 2013-07-18

    Application No.: US13350909

    Filing Date: 2012-01-16

    IPC Class: G06F12/08

    CPC Class: G06F12/0862, G06F2212/6026

    Abstract: Systems and methods for populating a cache using a hardware prefetcher are disclosed. A method for prefetching cache entries includes determining an initial stride value based on at least a first and a second demand miss address in the cache, verifying the initial stride value based on a third demand miss address in the cache, prefetching a predetermined number of cache entries based on the verified initial stride value, determining an expected next miss address in the cache based on the verified initial stride value and the addresses of the prefetched cache entries, and confirming the verified initial stride value by comparing the expected next miss address to a next demand miss address in the cache. If the verified initial stride value is confirmed, additional cache entries are prefetched. If it is not confirmed, further prefetching is stalled and an alternate stride value is determined.

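    The learn/verify/confirm flow in this abstract can be sketched as a small state machine. The C below is an illustrative approximation (the state names, prefetch degree, and issue_prefetch stand-in are assumptions, not the patent's): the first two demand misses yield an initial stride, the third verifies it, a few lines are prefetched, and the next demand miss acts as feedback. A miss at the expected address confirms the stride and prefetching continues; an unexpected miss stalls prefetching and an alternate stride is learned.

        #include <stdint.h>
        #include <stdio.h>

        #define PREFETCH_DEGREE 4   /* illustrative number of lines fetched per step */

        typedef enum {
            NO_HISTORY,            /* no demand miss seen yet                   */
            HAVE_FIRST_MISS,       /* one miss seen; the next one gives a stride */
            HAVE_INITIAL_STRIDE,   /* initial stride set; awaiting verification  */
            STRIDE_VERIFIED        /* verified; awaiting confirmation            */
        } pf_state_t;

        typedef struct {
            pf_state_t state;
            uint64_t   last_miss;      /* most recent demand-miss address        */
            int64_t    stride;         /* candidate or verified stride           */
            uint64_t   expected_miss;  /* where the next demand miss should land */
        } prefetcher_t;

        static void issue_prefetch(uint64_t addr)   /* stand-in for a real request */
        {
            printf("prefetch line at 0x%llx\n", (unsigned long long)addr);
        }

        static void prefetch_ahead(prefetcher_t *p, uint64_t addr)
        {
            for (int i = 1; i <= PREFETCH_DEGREE; i++)
                issue_prefetch(addr + (uint64_t)((int64_t)i * p->stride));
            /* a correct stride should make the next demand miss fall just
             * past the lines fetched above */
            p->expected_miss = addr + (uint64_t)((int64_t)(PREFETCH_DEGREE + 1) * p->stride);
        }

        static void on_demand_miss(prefetcher_t *p, uint64_t addr)
        {
            switch (p->state) {
            case NO_HISTORY:
                p->state = HAVE_FIRST_MISS;
                break;
            case HAVE_FIRST_MISS:                 /* initial stride from misses 1 and 2 */
                p->stride = (int64_t)(addr - p->last_miss);
                p->state  = HAVE_INITIAL_STRIDE;
                break;
            case HAVE_INITIAL_STRIDE:             /* verify against miss 3 */
                if ((int64_t)(addr - p->last_miss) == p->stride) {
                    prefetch_ahead(p, addr);
                    p->state = STRIDE_VERIFIED;
                } else {
                    p->stride = (int64_t)(addr - p->last_miss);  /* new candidate */
                }
                break;
            case STRIDE_VERIFIED:                 /* negative-feedback check */
                if (addr == p->expected_miss) {
                    prefetch_ahead(p, addr);      /* confirmed: keep prefetching */
                } else {
                    p->stride = (int64_t)(addr - p->last_miss);  /* alternate stride */
                    p->state  = HAVE_INITIAL_STRIDE;             /* stall until re-verified */
                }
                break;
            }
            p->last_miss = addr;
        }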

4. Use of Loop and Addressing Mode Instruction Set Semantics to Direct Hardware Prefetching
    Invention application
    Status: Pending (published)

    Publication No.: US20130185516A1

    Publication Date: 2013-07-18

    Application No.: US13350914

    Filing Date: 2012-01-16

    IPC Class: G06F12/12

    Abstract: Systems and methods for prefetching cache lines into a cache coupled to a processor. A hardware prefetcher is configured to recognize a memory access instruction as an auto-increment-address (AIA) memory access instruction, infer a stride value from the increment field of the AIA instruction, and prefetch lines into the cache based on the stride value. Additionally or alternatively, the hardware prefetcher is configured to recognize that prefetched cache lines are part of a hardware loop, determine the maximum loop count of the hardware loop and a remaining loop count (the difference between the maximum loop count and the number of loop iterations already completed), select a number of cache lines to prefetch, and, when the remaining loop count is less than the selected number of cache lines, truncate the actual number of cache lines prefetched to be less than or equal to the remaining loop count.

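    A short sketch of the two mechanisms this abstract describes, in C. The decoded_access_t fields and the default prefetch degree are assumptions for illustration only: the stride is taken directly from the instruction's increment field rather than learned from misses, and the number of lines prefetched is truncated to the hardware loop's remaining iteration count.

        #include <stdint.h>
        #include <stdio.h>

        #define DEFAULT_PREFETCH_LINES 4   /* illustrative default prefetch degree */

        /* Hypothetical decode information handed to the prefetcher;
         * the field names are illustrative, not the patent's. */
        typedef struct {
            int      is_auto_increment;  /* AIA-style memory access?                   */
            int64_t  increment;          /* value of the instruction's increment field */
            int      in_hardware_loop;   /* access executes inside a hardware loop     */
            uint32_t max_loop_count;     /* total iterations of the hardware loop      */
            uint32_t completed_iters;    /* iterations already executed                */
        } decoded_access_t;

        static void prefetch_for_access(uint64_t addr, const decoded_access_t *d)
        {
            if (!d->is_auto_increment)
                return;                          /* leave it to a conventional prefetcher */

            /* Stride inferred from the AIA increment field; no training needed. */
            int64_t  stride = d->increment;
            uint32_t lines  = DEFAULT_PREFETCH_LINES;

            /* Truncate to the remaining loop count so lines past the end of
             * the hardware loop are not fetched. */
            if (d->in_hardware_loop) {
                uint32_t remaining = d->max_loop_count - d->completed_iters;
                if (remaining < lines)
                    lines = remaining;
            }

            for (uint32_t i = 1; i <= lines; i++)
                printf("prefetch line at 0x%llx\n",
                       (unsigned long long)(addr + (uint64_t)((int64_t)i * stride)));
        }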

5. Efficient Bloom filter
    Granted patent
    Status: Expired

    Publication No.: US07620781B2

    Publication Date: 2009-11-17

    Application No.: US11642314

    Filing Date: 2006-12-19

    IPC Class: G06F12/0026

    CPC Class: G06F12/0864, Y10S707/99943

    Abstract: Implementation of a Bloom filter using multiple single-ported memory slices. A control value is combined with a hashed address value such that the resultant address value has the property that one, and only one, of the k memories or slices is selected for each bank for a given input value a. Collisions are thereby avoided, and the multiple hash accesses for a given input value a may be performed concurrently. Other embodiments are also described and claimed.

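    One way the slice-selection property in this abstract could be realized is sketched below in C; this is an assumption-laden illustration, not the patent's construction. A control value derived from the input rotates which slice each of the k hash functions uses, so the k lookups for a given input always land on k distinct single-ported slices and never contend for the same memory.

        #include <stdbool.h>
        #include <stdint.h>

        #define K_HASHES   4                  /* hash functions = single-ported slices */
        #define SLICE_BITS 10
        #define SLICE_SIZE (1u << SLICE_BITS)

        /* Each slice models one single-ported memory bank of one-bit entries. */
        static uint8_t slices[K_HASHES][SLICE_SIZE];

        /* Illustrative hash family (not the patent's): mix the key with a
         * per-hash constant. */
        static uint32_t hash_i(uint32_t a, uint32_t i)
        {
            uint32_t x = a * 2654435761u + i * 0x9E3779B9u;
            x ^= x >> 16;
            return (x * 0x85EBCA6Bu) ^ (x >> 13);
        }

        /* The control value rotates the slice assignment, so hashes 0..k-1 map
         * to k distinct slices for any given input a. */
        static void locate(uint32_t a, uint32_t i, uint32_t *slice, uint32_t *index)
        {
            uint32_t control = hash_i(a, K_HASHES);       /* control value for a */
            *slice = (control + i) % K_HASHES;
            *index = hash_i(a, i) & (SLICE_SIZE - 1);
        }

        void bloom_insert(uint32_t a)
        {
            for (uint32_t i = 0; i < K_HASHES; i++) {
                uint32_t slice, index;
                locate(a, i, &slice, &index);
                slices[slice][index] = 1;
            }
        }

        bool bloom_maybe_contains(uint32_t a)
        {
            for (uint32_t i = 0; i < K_HASHES; i++) {
                uint32_t slice, index;
                locate(a, i, &slice, &index);
                if (!slices[slice][index])
                    return false;                         /* definitely absent */
            }
            return true;                                  /* possibly present  */
        }

    In hardware, each iteration of these loops would target a different physical slice, so the k accesses could be issued in the same cycle; in this software sketch they simply run sequentially.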

6. Efficient bloom filter
    Invention application
    Status: Expired

    Publication No.: US20080147714A1

    Publication Date: 2008-06-19

    Application No.: US11642314

    Filing Date: 2006-12-19

    IPC Class: G06F17/30

    CPC Class: G06F12/0864, Y10S707/99943

    Abstract: Implementation of a Bloom filter using multiple single-ported memory slices. A control value is combined with a hashed address value such that the resultant address value has the property that one, and only one, of the k memories or slices is selected for each bank for a given input value a. Collisions are thereby avoided, and the multiple hash accesses for a given input value a may be performed concurrently. Other embodiments are also described and claimed.
