Dynamically adapting the configuration of a multi-queue cache based on access patterns
    86.
    Granted Patent (In Force)

    Publication No.: US09201804B1

    Publication Date: 2015-12-01

    Application No.: US13745523

    Filing Date: 2013-01-18

    Applicant: Google Inc.

    Inventor: Zoltan Egyed

    Abstract: A multi-queue cache is configured with an initial configuration, where the initial configuration includes one or more queues for storing data items. Each of the one or more queues has an initial size. Thereafter, the multi-queue cache is operated according to a multi-queue cache replacement algorithm. During operation, access patterns for the multi-queue cache are analyzed. Based on the access patterns, an updated configuration for the multi-queue cache is determined. Thereafter, the configuration of the multi-queue cache is modified during operation. The modifying includes adjusting the size of at least one of the one or more queues according to the determined updated configuration for the multi-queue cache.
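    A minimal Python sketch of the idea, assuming a two-queue (cold/hot) LRU configuration and a simple hit-count heuristic for promotion and resizing; the queue names, the promotion threshold, and the resize rule are illustrative assumptions rather than the algorithm claimed in the patent:

```python
from collections import OrderedDict

class MultiQueueCache:
    """Two LRU queues ('cold', 'hot'): new items enter 'cold' and are
    promoted to 'hot' after repeated hits.  adapt() shifts capacity
    toward whichever queue absorbed more hits since the last call."""

    def __init__(self, cold_size=64, hot_size=64, promote_after=2):
        self.sizes = {"cold": cold_size, "hot": hot_size}
        self.queues = {"cold": OrderedDict(), "hot": OrderedDict()}
        self.hits = {"cold": 0, "hot": 0}
        self.seen = {}                        # per-key access counter
        self.promote_after = promote_after

    def get(self, key):
        for name in ("hot", "cold"):
            q = self.queues[name]
            if key in q:
                q.move_to_end(key)            # LRU touch
                self.hits[name] += 1
                self.seen[key] = self.seen.get(key, 0) + 1
                value = q[key]
                if name == "cold" and self.seen[key] >= self.promote_after:
                    del q[key]                # promote frequently hit item
                    self._insert("hot", key, value)
                return value
        return None

    def put(self, key, value):
        for q in self.queues.values():
            if key in q:
                q[key] = value
                q.move_to_end(key)
                return
        self._insert("cold", key, value)      # new items start cold

    def _insert(self, name, key, value):
        q = self.queues[name]
        q[key] = value
        while len(q) > self.sizes[name]:
            q.popitem(last=False)             # evict least recently used

    def adapt(self, step=8):
        """Re-balance queue sizes based on the observed access pattern."""
        if self.hits["hot"] > self.hits["cold"] and self.sizes["cold"] > step:
            self.sizes["hot"] += step
            self.sizes["cold"] -= step
        elif self.hits["cold"] > self.hits["hot"] and self.sizes["hot"] > step:
            self.sizes["cold"] += step
            self.sizes["hot"] -= step
        for name, q in self.queues.items():   # trim any queue that lost capacity
            while len(q) > self.sizes[name]:
                q.popitem(last=False)
        self.hits = {"cold": 0, "hot": 0}     # start a fresh observation window
```

    Calling adapt() periodically during operation mirrors the claimed behavior: the cache keeps serving requests while the queue sizes are adjusted to match the observed access pattern.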


IMPLEMENTING COHERENT ACCELERATOR FUNCTION ISOLATION FOR VIRTUALIZATION
    88.
    Patent Application (In Force)

    Publication No.: US20150317275A1

    Publication Date: 2015-11-05

    Application No.: US14628195

    Filing Date: 2015-02-20

    Abstract: A method, system, and computer program product are provided for implementing coherent accelerator function isolation for virtualization in an input/output (IO) adapter in a computer system. A coherent accelerator provides accelerator function units (AFUs); each AFU is adapted to operate independently of the other AFUs to perform a computing task that can be implemented within application software on a processor. An AFU has access to the system memory bound to the application software and is adapted to make copies of that memory within its own AFU memory cache. As part of this memory coherency domain, both the AFU memory cache and the processor memory cache are adapted to be aware of changes to data held in either cache, as well as changes to data in memory of which the respective cache contains a copy.
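    The coherency-domain behavior described above can be pictured with a toy software model: a write made through one cache invalidates any copy of the same address held by the others, so every participant observes the latest data on its next read. This is purely illustrative; the class and method names are assumptions, and real coherence between a processor and an AFU is enforced in hardware, not in application code.

```python
class CoherencyDomain:
    """Toy model of a coherency domain: a write made through one cache
    (or directly to memory) invalidates any copy of the same address
    held by the other attached caches."""

    def __init__(self):
        self.memory = {}                 # address -> value
        self.caches = []

    def attach(self, cache):
        self.caches.append(cache)

    def write(self, addr, value, source=None):
        self.memory[addr] = value
        for cache in self.caches:
            if cache is not source:
                cache.invalidate(addr)   # snoop: drop stale copies


class Cache:
    """One participant, e.g. a processor cache or an AFU memory cache."""

    def __init__(self, domain, name):
        self.domain = domain
        self.name = name
        self.lines = {}                  # address -> cached value
        domain.attach(self)

    def read(self, addr):
        if addr not in self.lines:       # miss: copy the value from memory
            self.lines[addr] = self.domain.memory.get(addr)
        return self.lines[addr]

    def write(self, addr, value):
        self.lines[addr] = value
        self.domain.write(addr, value, source=self)

    def invalidate(self, addr):
        self.lines.pop(addr, None)


# An AFU write becomes visible to the processor cache on its next read.
domain = CoherencyDomain()
cpu, afu = Cache(domain, "cpu"), Cache(domain, "afu0")
domain.write(0x10, "old")
assert cpu.read(0x10) == "old"           # cpu now holds a copy
afu.write(0x10, "new")                   # invalidates cpu's stale copy
assert cpu.read(0x10) == "new"
```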


Method and apparatus for synchronizing a cache
    89.
    Granted Patent (In Force)

    Publication No.: US09098420B2

    Publication Date: 2015-08-04

    Application No.: US13279020

    Filing Date: 2011-10-21

    CPC classification number: G06F12/0866 G06F2212/282 G06F2212/284

    Abstract: An approach is provided for segmenting a cache into one or more cache segments and synchronizing those segments. A cache platform causes, at least in part, a segmentation of at least one cache into one or more cache segments. The cache platform further determines that at least one of the cache segments is invalid. The cache platform also causes, at least in part, a synchronization of that cache segment. The approach allows the synchronization of the cache segments to be dynamically optimized based on one or more characteristics of the devices and/or the connection involved in the cache synchronization.
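    A small Python sketch of per-segment synchronization, assuming segments are selected by key hash and refreshed lazily from a backing store through a caller-supplied fetch_segment callback; both the hashing scheme and the callback are assumptions for illustration, not details from the patent:

```python
import hashlib

class SegmentedCache:
    """Cache split into fixed segments by key hash.  When a segment is
    marked invalid, only that segment is re-fetched from the backing
    store instead of re-synchronizing the whole cache."""

    def __init__(self, fetch_segment, num_segments=4):
        self.fetch_segment = fetch_segment             # segment_id -> {key: value}
        self.num_segments = num_segments
        self.segments = [dict() for _ in range(num_segments)]
        self.valid = [False] * num_segments

    def segment_of(self, key):
        return hashlib.sha1(key.encode()).digest()[0] % self.num_segments

    def invalidate(self, segment_id):
        self.valid[segment_id] = False                 # mark stale, sync lazily

    def get(self, key):
        seg = self.segment_of(key)
        if not self.valid[seg]:                        # synchronize this segment only
            self.segments[seg] = dict(self.fetch_segment(seg))
            self.valid[seg] = True
        return self.segments[seg].get(key)
```

    Choosing the number of segments and deciding when to refresh them eagerly versus lazily is where the device- and connection-dependent optimization mentioned in the abstract would plug in.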


PARTITIONING SHARED CACHES
    90.
    Patent Application (In Force)

    Publication No.: US20150095577A1

    Publication Date: 2015-04-02

    Application No.: US14040330

    Filing Date: 2013-09-27

    Applicant: Facebook, Inc.

    Abstract: Technology is provided for partitioning a shared unified cache in a multi-processor computer system. The technology can receive a request to allocate a portion of a shared unified cache memory for storing only executable instructions, partition the cache memory into multiple partitions, and allocate one of the partitions for storing only executable instructions. The technology can further determine the size of the portion of the cache memory to be allocated for storing only executable instructions as a function of the size of the multi-processor's L1 instruction cache and the number of cores in the multi-processor.
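    The sizing rule can be sketched as a single function. The abstract only states that the instruction-only partition size is a function of the L1 instruction-cache size and the core count; the particular formula below (a per-core multiple of the L1 I-cache, capped at a fraction of the shared cache) is an illustrative assumption:

```python
def instruction_partition_size(shared_cache_bytes: int,
                               l1_icache_bytes: int,
                               num_cores: int,
                               per_core_factor: int = 4,
                               max_fraction: float = 0.5) -> int:
    """Bytes of the shared unified cache to reserve for executable
    instructions only.  The formula is an assumed heuristic: room for a
    multiple of each core's L1 I-cache, never more than half the cache."""
    wanted = per_core_factor * l1_icache_bytes * num_cores
    return min(wanted, int(shared_cache_bytes * max_fraction))


# Example: 8 MiB shared cache, 32 KiB L1 I-cache per core, 4 cores
print(instruction_partition_size(8 * 1024**2, 32 * 1024, 4))  # -> 524288
```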

