VARIABLE DISTANCE BYPASS BETWEEN TAG ARRAY AND DATA ARRAY PIPELINES IN A CACHE
    1.
    Invention Application
    VARIABLE DISTANCE BYPASS BETWEEN TAG ARRAY AND DATA ARRAY PIPELINES IN A CACHE (Granted)

    Publication No.: US20140365729A1

    Publication Date: 2014-12-11

    Application No.: US13912809

    Filing Date: 2013-06-07

    CPC classification number: G06F12/0855 G06F12/0844 G06F12/0846

    Abstract: The present application describes embodiments of techniques for picking a data array lookup request for execution in a data array pipeline a variable number of cycles behind a corresponding tag array lookup request that is concurrently executing in a tag array pipeline. Some embodiments of a method for picking the data array lookup request include picking the data array lookup request for execution in a data array pipeline of a cache concurrently with execution of a tag array lookup request in a tag array pipeline of the cache. The data array lookup request is picked for execution in response to resources of the data array pipeline becoming available after picking the tag array lookup request for execution. Some embodiments of the method may be implemented in a cache.

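    As a rough illustration of the bypass-distance idea in the abstract above, the following Python sketch models a picker that launches each data array lookup as soon as a data pipeline slot frees up after the corresponding tag array lookup was picked, so the distance between the two pipelines varies per request. The class and method names and the fixed slot count are illustrative assumptions, not details taken from the patent.

```python
from collections import deque

# Minimal sketch (not the patented implementation) of picking a data array
# lookup a variable number of cycles behind its tag array lookup.
class VariableDistanceBypass:
    def __init__(self, data_pipe_slots=2):
        self.data_pipe_slots = data_pipe_slots   # assumed data pipeline resource limit
        self.in_flight_data = 0                  # data lookups currently executing
        self.waiting = deque()                   # tag lookups awaiting a data pipeline slot
        self.cycle = 0

    def pick_tag_lookup(self, request_id):
        # The tag array lookup starts immediately; remember the cycle it was
        # picked so the bypass distance can be observed later.
        self.waiting.append((request_id, self.cycle))

    def tick(self):
        # Each cycle, pick data array lookups for as many waiting requests as
        # the data pipeline has free resources for.
        picked = []
        while self.waiting and self.in_flight_data < self.data_pipe_slots:
            request_id, tag_pick_cycle = self.waiting.popleft()
            self.in_flight_data += 1
            distance = self.cycle - tag_pick_cycle   # variable, not a fixed offset
            picked.append((request_id, distance))
        self.cycle += 1
        return picked

    def retire_data_lookup(self):
        # Called when a data array lookup leaves the pipeline, freeing a slot.
        self.in_flight_data -= 1
```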

    Variable distance bypass between tag array and data array pipelines in a cache
    2.
    Invention Grant
    Variable distance bypass between tag array and data array pipelines in a cache (Granted)

    Publication No.: US09529720B2

    Publication Date: 2016-12-27

    Application No.: US13912809

    Filing Date: 2013-06-07

    CPC classification number: G06F12/0855 G06F12/0844 G06F12/0846

    Abstract: The present application describes embodiments of techniques for picking a data array lookup request for execution in a data array pipeline a variable number of cycles behind a corresponding tag array lookup request that is concurrently executing in a tag array pipeline. Some embodiments of a method for picking the data array lookup request include picking the data array lookup request for execution in a data array pipeline of a cache concurrently with execution of a tag array lookup request in a tag array pipeline of the cache. The data array lookup request is picked for execution in response to resources of the data array pipeline becoming available after picking the tag array lookup request for execution. Some embodiments of the method may be implemented in a cache.


    SIZE ADJUSTING CACHES BY WAY
    3.
    Invention Application
    SIZE ADJUSTING CACHES BY WAY (Pending, Published)

    Publication No.: US20150026406A1

    Publication Date: 2015-01-22

    Application No.: US13946120

    Filing Date: 2013-07-19

    Abstract: A size of a cache of a processing system is adjusted by ways, such that each set of the cache has the same number of ways. The cache is a set-associative cache, whereby each set includes a number of ways. In response to defined events at the processing system, a cache controller changes the number of ways of each set of the cache. For example, in response to a processor core indicating that it is entering a period of reduced activity, the cache controller can reduce the number of ways available in each set of the cache.

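    The Python sketch below illustrates, under assumed names and a hypothetical shrink-by-half policy, how a controller might keep every set at the same way count while growing or shrinking the cache in response to core activity events. It is a sketch of the described behavior, not the claimed implementation.

```python
# Minimal sketch of resizing a set-associative cache by ways: every set
# exposes the same number of enabled ways, and a controller changes that
# number in response to events from the processor core.
class WaySizedCache:
    def __init__(self, num_sets=64, max_ways=8):
        self.max_ways = max_ways
        self.enabled_ways = max_ways
        self.sets = [[None] * max_ways for _ in range(num_sets)]

    def set_enabled_ways(self, ways):
        # Change the number of ways available in every set of the cache.
        assert 1 <= ways <= self.max_ways
        self.enabled_ways = ways

    def on_core_event(self, event):
        # Hypothetical policy: shrink when the core reports reduced activity,
        # restore the full size when it becomes active again.
        if event == "reduced_activity":
            self.set_enabled_ways(max(1, self.enabled_ways // 2))
        elif event == "active":
            self.set_enabled_ways(self.max_ways)

    def lookup(self, set_index, tag):
        # Only the currently enabled ways of the selected set are searched.
        for way in range(self.enabled_ways):
            line = self.sets[set_index][way]
            if line is not None and line == tag:
                return way
        return None
```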

    Power on die discovery in 3D stacked die architectures with varying number of stacked die
    4.
    Invention Grant

    Publication No.: US11610879B2

    Publication Date: 2023-03-21

    Application No.: US16226311

    Filing Date: 2018-12-19

    Abstract: A handshake mechanism allows die discovery in a stacked die architecture that keeps inputs isolated until the handshake is complete. Power good indications are used as handshake signals between the dies. A die keeps inputs isolated from above until a power good indication from the die above indicates presence of the die above. The die keeps inputs isolated from below until the die detects power is good and receives a power good indication from the die below. In an implementation, drivers and receivers, apart from configuration bus drivers and receivers, are disabled until a fuse distribution done signal indicates that repairs have been completed. Drivers are then enabled and, after a delay to ensure signals are driven, receivers are deisolated. A top die in the die stack never sees a power good indication from a die above and therefore keeps inputs from above isolated. That allows the height of the die stack to be unknown at power on.
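    A highly simplified, software-only Python sketch of the handshake described above follows. The real mechanism is implemented in hardware; the die ordering, attribute names, and single bottom-up discovery pass here are assumptions made purely for illustration.

```python
# Software-only sketch of the power-good handshake between stacked dies.
class StackedDie:
    def __init__(self, position):
        self.position = position
        self.power_good = False
        self.power_good_from_above = False   # never asserted for the top die
        self.power_good_from_below = False   # never asserted for the bottom die here
        self.inputs_from_above_isolated = True
        self.inputs_from_below_isolated = True

    def update_isolation(self):
        # Inputs from above stay isolated until the die above signals power good;
        # the top die never receives that signal, so the stack height need not be
        # known at power on.
        if self.power_good_from_above:
            self.inputs_from_above_isolated = False
        # Inputs from below stay isolated until this die's own power is good and
        # the die below has signalled power good.
        if self.power_good and self.power_good_from_below:
            self.inputs_from_below_isolated = False

def discover(stack):
    # Assert power good on every die and exchange indications between
    # neighbours, from the bottom die (index 0) to the top die.
    for i, die in enumerate(stack):
        die.power_good = True
        if i > 0:
            die.power_good_from_below = True           # indication from the die below
            stack[i - 1].power_good_from_above = True  # die below sees its upper neighbour
    for die in stack:
        die.update_isolation()
    return stack
```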

    SIZE ADJUSTING CACHES BASED ON PROCESSOR POWER MODE
    5.
    Invention Application
    SIZE ADJUSTING CACHES BASED ON PROCESSOR POWER MODE (Pending, Published)

    Publication No.: US20150026407A1

    Publication Date: 2015-01-22

    Application No.: US13946125

    Filing Date: 2013-07-19

    Abstract: As a processor enters selected low-power modes, a cache is flushed of data by writing data stored at the cache to other levels of a memory hierarchy. The flushing of the cache allows the size of the cache to be reduced without suffering an additional performance penalty of writing the data at the reduced cache locations to the memory hierarchy. Accordingly, when the cache exits the selected low-power modes, it is sized to a minimum size by setting the number of ways of the cache to a minimum number. In response to defined events at the processing system, a cache controller changes the number of ways of each set of the cache.

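    The Python sketch below illustrates the flush-then-shrink idea under assumed interfaces: a hypothetical writeback() call on the next memory level and the way-sized cache sketched for the earlier entry. It shows the described behavior, not the patented design.

```python
# Minimal sketch of sizing a cache around low-power modes: flush cached data
# while entering the mode, then exit at the minimum way count so shrinking the
# cache later incurs no additional writeback penalty.
class PowerModeCacheController:
    MIN_WAYS = 1

    def __init__(self, cache, memory):
        self.cache = cache     # e.g. the WaySizedCache sketched for the earlier entry
        self.memory = memory   # next level of the memory hierarchy (hypothetical API)

    def enter_low_power(self):
        # Flush: write every cached line back so that reducing the cache size
        # later does not require any further writebacks.
        for set_lines in self.cache.sets:
            for way, line in enumerate(set_lines):
                if line is not None:
                    self.memory.writeback(line)
                    set_lines[way] = None

    def exit_low_power(self):
        # Size the cache to its minimum by setting the way count to the minimum;
        # later events at the processing system can grow it again.
        self.cache.set_enabled_ways(self.MIN_WAYS)
```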
