TAG ACCELERATOR FOR LOW LATENCY DRAM CACHE
    Invention Application

    Publication Number: US20190196974A1

    Publication Date: 2019-06-27

    Application Number: US15855838

    Filing Date: 2017-12-27

    Abstract: Systems, apparatuses, and methods for implementing a tag accelerator cache are disclosed. A system includes at least a data cache and a control unit coupled to the data cache via a memory controller. The control unit includes a tag accelerator cache (TAC) for caching tag blocks fetched from the data cache. The data cache is organized such that multiple tags are retrieved in a single access. This hides the tag latency penalty for future accesses to neighboring tags and improves cache bandwidth. When a tag block is fetched from the data cache, the tag block is cached in the TAC. Memory requests received by the control unit first look up the TAC before being forwarded to the data cache. Due to the presence of spatial locality in applications, the TAC can filter out a large percentage of tag accesses to the data cache, resulting in latency and bandwidth savings.
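    The filtering behavior the abstract describes can be illustrated with a small sketch. This is not the patent's implementation: the class and constant names (`TagAcceleratorCache`, `TAGS_PER_BLOCK`), the LRU policy, and the eight-tags-per-block width are all assumptions chosen for illustration. The point it shows is that one wide tag-block fetch serves many spatially local lookups.

```python
# Hypothetical sketch of a tag accelerator cache (TAC): each data-cache
# access returns a whole block of neighboring tags, which the TAC retains
# so later requests to nearby addresses skip the data cache's tag storage.
from collections import OrderedDict

TAGS_PER_BLOCK = 8  # tags retrieved per data-cache access (assumed width)

class TagAcceleratorCache:
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.blocks = OrderedDict()   # tag-block address -> list of tags
        self.data_cache_tag_reads = 0

    def _fetch_tag_block(self, block_addr):
        # Simulate one wide access to the data cache that returns a
        # whole block of neighboring tags in a single read.
        self.data_cache_tag_reads += 1
        return [(block_addr, i) for i in range(TAGS_PER_BLOCK)]

    def lookup(self, addr):
        block_addr, offset = divmod(addr, TAGS_PER_BLOCK)
        if block_addr in self.blocks:            # TAC hit: access filtered
            self.blocks.move_to_end(block_addr)  # LRU touch
        else:                                    # TAC miss: go to data cache
            self.blocks[block_addr] = self._fetch_tag_block(block_addr)
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict LRU tag block
        return self.blocks[block_addr][offset]

tac = TagAcceleratorCache()
for addr in range(16):           # spatially local stream of 16 requests
    tac.lookup(addr)
print(tac.data_cache_tag_reads)  # only 2 tag-block fetches for 16 lookups
```

    With a spatially local stream, sixteen lookups cost only two tag reads from the data cache; the other fourteen are filtered by the TAC, which is the latency and bandwidth saving the abstract claims.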

    CACHE TO CACHE DATA TRANSFER ACCELERATION TECHNIQUES

    Publication Number: US20190179758A1

    Publication Date: 2019-06-13

    Application Number: US15839662

    Filing Date: 2017-12-12

    Abstract: Systems, apparatuses, and methods for accelerating cache to cache data transfers are disclosed. A system includes at least a plurality of processing nodes and prediction units, an interconnect fabric, and a memory. A first prediction unit is configured to receive memory requests generated by a first processing node as the requests traverse the interconnect fabric on the path to memory. When the first prediction unit receives a memory request, the first prediction unit generates a prediction of whether data targeted by the request is cached by another processing node. The first prediction unit is configured to cause a speculative probe to be sent to a second processing node responsive to predicting that the data targeted by the memory request is cached by the second processing node. The speculative probe accelerates the retrieval of the data from the second processing node if the prediction is correct.
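    A minimal sketch of the prediction idea follows. The predictor here is a simple last-writer table mapping cache-line addresses to node IDs; that policy, along with the names `PredictionUnit`, `observe_write`, and `on_memory_request`, is an assumption for illustration, not the mechanism claimed in the patent.

```python
# Hypothetical sketch: a per-node prediction unit watches memory requests
# crossing the interconnect fabric. When it predicts the target line is
# cached at another node, it returns that node so a speculative probe can
# be sent, starting the cache-to-cache transfer early.

class PredictionUnit:
    def __init__(self):
        self.last_writer = {}        # cache-line address -> node id

    def observe_write(self, addr, node):
        # Train the predictor: remember which node last wrote this line.
        self.last_writer[addr] = node

    def on_memory_request(self, addr, requester):
        """Return the node to speculatively probe, or None."""
        owner = self.last_writer.get(addr)
        if owner is not None and owner != requester:
            return owner             # speculative probe to predicted sharer
        return None                  # fall through to the normal memory path

pu = PredictionUnit()
pu.observe_write(0x1000, node=2)                   # node 2 last wrote 0x1000
print(pu.on_memory_request(0x1000, requester=0))   # predicts node 2
print(pu.on_memory_request(0x2000, requester=0))   # no prediction: None
```

    If the prediction is correct, the probed node can supply the data before the request completes its trip to memory; if wrong, the speculative probe is wasted work but correctness is unaffected, since the normal memory path still resolves the request.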

    VARIABLE DISTANCE BYPASS BETWEEN TAG ARRAY AND DATA ARRAY PIPELINES IN A CACHE
    Invention Application (Granted)

    Publication Number: US20140365729A1

    Publication Date: 2014-12-11

    Application Number: US13912809

    Filing Date: 2013-06-07

    CPC classification number: G06F12/0855 G06F12/0844 G06F12/0846

    Abstract: The present application describes embodiments of techniques for picking a data array lookup request for execution in a data array pipeline a variable number of cycles behind a corresponding tag array lookup request that is concurrently executing in a tag array pipeline. Some embodiments of a method for picking the data array lookup request include picking the data array lookup request for execution in a data array pipeline of a cache concurrently with execution of a tag array lookup request in a tag array pipeline of the cache. The data array lookup request is picked for execution in response to resources of the data array pipeline becoming available after picking the tag array lookup request for execution. Some embodiments of the method may be implemented in a cache.
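    The variable pick distance can be sketched with a toy scheduler. The one-cycle data-pipeline occupancy and the `schedule` function are assumptions for illustration; the sketch only shows the core idea that the gap between a tag-array pick and its matching data-array pick varies with resource availability rather than being fixed.

```python
# Hypothetical sketch of the variable-distance bypass: each data-array
# lookup is picked as soon as the data pipeline has a free slot after its
# tag-array lookup was picked, so the tag-to-data distance varies per
# request with pipeline occupancy.

def schedule(requests, data_free_at):
    """requests: list of (name, tag_pick_cycle) in pick order.
    data_free_at: cycle at which the data pipeline next has resources.
    Returns (name, tag_cycle, data_cycle, bypass_distance) tuples."""
    out = []
    for name, tag_cycle in requests:
        # Pick the data-array lookup once resources are available,
        # but never before its tag-array lookup has been picked.
        data_cycle = max(tag_cycle, data_free_at)
        out.append((name, tag_cycle, data_cycle, data_cycle - tag_cycle))
        data_free_at = data_cycle + 1  # data pipeline busy for 1 cycle
    return out

for row in schedule([("A", 0), ("B", 1), ("C", 5)], data_free_at=2):
    print(row)
# A and B run 2 cycles behind their tag lookups; C runs 0 cycles behind:
# the bypass distance adapts to when the data pipeline frees up.
```

    A fixed-distance design would have to pick every data lookup at the worst-case offset; letting the distance float, as the abstract describes, lets requests like C proceed with no delay when the data pipeline is idle.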

