L2 cache retention mode
    1.
    Granted Patent
    Status: In Force

    Publication Number: US09513693B2

    Publication Date: 2016-12-06

    Application Number: US14224773

    Application Date: 2014-03-25

    Applicant: Apple Inc.

    Abstract: Systems and methods for reducing leakage power in a L2 cache within a SoC. The L2 cache is partitioned into multiple banks, and each bank has its own separate power supply. An idle counter is maintained for each bank to count a number of cycles during which the bank has been inactive. The temperature and leaky factor of the SoC are used to select an operating point of the SoC. Based on the operating point, an idle counter threshold is set, with a high temperature and high leaky factor corresponding to a relatively low idle counter threshold, and with a low temperature and low leaky factor corresponding to a relatively high idle counter threshold. When a given idle counter exceeds the idle counter threshold, the voltage supplied to the corresponding bank is reduced to a voltage sufficient for retention of data but not for access.
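    The abstract boils down to a per-bank idle counter compared against a threshold chosen from the operating point. A minimal C sketch of that policy is below; the bank count, the three operating points, and the threshold values are illustrative assumptions, not figures from the patent.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_BANKS 8  /* assumed bank count, not specified in the abstract */

/* Assumed operating points: hotter/leakier silicon gets a lower threshold,
   so idle banks drop to retention voltage sooner. */
typedef enum { OP_LOW_TEMP_LOW_LEAK, OP_NOMINAL, OP_HIGH_TEMP_HIGH_LEAK } op_point_t;

static const uint32_t idle_threshold[] = {
    [OP_LOW_TEMP_LOW_LEAK]   = 4096, /* relatively high threshold */
    [OP_NOMINAL]             = 1024,
    [OP_HIGH_TEMP_HIGH_LEAK] = 128,  /* relatively low threshold */
};

typedef struct {
    uint32_t idle_cycles;   /* cycles since the bank was last accessed */
    bool     in_retention;  /* true when the bank supply is at retention voltage */
} bank_state_t;

/* Called once per clock cycle with a bitmask of banks accessed this cycle. */
static void retention_tick(bank_state_t banks[NUM_BANKS],
                           uint32_t accessed_mask, op_point_t op)
{
    for (int b = 0; b < NUM_BANKS; b++) {
        if (accessed_mask & (1u << b)) {
            /* An access restores full voltage and resets the idle counter. */
            banks[b].idle_cycles = 0;
            banks[b].in_retention = false;
        } else if (++banks[b].idle_cycles > idle_threshold[op] &&
                   !banks[b].in_retention) {
            /* Idle long enough: lower the bank supply to a data-retention level. */
            banks[b].in_retention = true;
            printf("bank %d -> retention voltage\n", b);
        }
    }
}

int main(void)
{
    bank_state_t banks[NUM_BANKS] = {0};
    /* Simulate a hot, leaky part where only bank 0 stays active. */
    for (uint32_t cycle = 0; cycle < 200; cycle++)
        retention_tick(banks, 0x1, OP_HIGH_TEMP_HIGH_LEAK);
    return 0;
}
```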

    Cache dependency handling
    2.
    Granted Patent

    Publication Number: US10127153B1

    Publication Date: 2018-11-13

    Application Number: US14868245

    Application Date: 2015-09-28

    Applicant: Apple Inc.

    Abstract: Techniques are disclosed relating to managing data-request dependencies for a cache. In one embodiment, an integrated circuit is disclosed that includes a plurality of requesting agents and a cache. The cache is configured to receive read and write requests from the plurality of requesting agents including a first request and a second request. The cache is configured to detect that the first and second requests specify addresses that correspond to different portions of the same cache line, and to determine whether to delay processing one of the first and second requests based on whether the first and second requests are from the same requesting agent. In some embodiments, the cache is configured to service the first and second requests in parallel in response to determining that the first and second requests are from the same requesting agent.
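    The key decision described here is whether to serialize two requests that hit different portions of the same cache line, based on whether they come from the same requesting agent. A hedged C sketch of that rule follows; the line size, the request fields, and the treatment of exact-address overlap are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE_BYTES 64u  /* assumed line size */

/* Illustrative request record; field names are not from the patent. */
typedef struct {
    uint64_t addr;     /* byte address targeted by the request */
    int      agent_id; /* which requesting agent issued it */
    bool     is_write;
} request_t;

static uint64_t line_of(uint64_t addr) { return addr / CACHE_LINE_BYTES; }

/* Returns true if the second request should be delayed behind the first.
   Per the abstract: requests to different portions of the same line are
   only serialized when they come from different requesting agents. */
static bool should_delay(const request_t *first, const request_t *second)
{
    bool same_line   = line_of(first->addr) == line_of(second->addr);
    bool same_offset = first->addr == second->addr;
    if (!same_line || same_offset)
        return false;  /* different lines, or an exact overlap handled elsewhere */
    return first->agent_id != second->agent_id;
}

int main(void)
{
    request_t a = { .addr = 0x1000, .agent_id = 0, .is_write = true  };
    request_t b = { .addr = 0x1008, .agent_id = 0, .is_write = false };
    request_t c = { .addr = 0x1010, .agent_id = 1, .is_write = false };

    printf("same agent, same line:      delay=%d\n", should_delay(&a, &b)); /* 0: in parallel */
    printf("different agent, same line: delay=%d\n", should_delay(&a, &c)); /* 1: serialized */
    return 0;
}
```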

    METHODS FOR PERFORMING A MEMORY RESOURCE RETRY
    3.

    Publication Number: US20180276128A1

    Publication Date: 2018-09-27

    Application Number: US15996776

    Application Date: 2018-06-04

    Applicant: Apple Inc.

    CPC classification number: G06F12/0842 G06F2212/1024 G06F2212/283

    Abstract: In an embodiment, an apparatus includes multiple memory resources, and a resource table that includes entries that correspond to respective memory resources of the multiple memory resources. The apparatus also includes a circuit configured to receive a first memory command. The first memory command is associated with a subset of the multiple memory resources. For each memory resource of the subset, the circuit is also configured to set a respective indicator associated with the first memory command, and to store a first value in a first entry of the resource table in response to a determination that the respective memory resource is unavailable. The circuit is also configured to store a second value in each entry of the resource table that corresponds to a memory resource of the subset in response to a determination that an entry corresponding to a given memory resource of the subset includes the first value.
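    The abstract describes a resource table that receives a "first value" when a needed resource is unavailable, with a "second value" then written across the command's whole resource subset once any of its entries holds the first value. The C sketch below models that flow; the value encodings, the table size, and the bitmask representation of the subset are assumptions, not details from the claims.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_RESOURCES 8  /* assumed number of memory resources */

/* Assumed encodings for the abstract's "first value" and "second value". */
enum { ENTRY_FREE = 0, ENTRY_BLOCKED = 1 /* first value */, ENTRY_RETRY = 2 /* second value */ };

typedef struct {
    uint8_t resource_table[NUM_RESOURCES];
} retry_unit_t;

typedef struct {
    uint32_t resource_mask;   /* subset of resources the command needs */
    uint32_t wait_indicators; /* per-resource indicators set for this command */
} mem_command_t;

/* Step 1: for each needed resource that is unavailable, set the command's
   indicator and record the "first value" in that resource's table entry. */
static void note_unavailable(retry_unit_t *u, mem_command_t *cmd,
                             const bool available[NUM_RESOURCES])
{
    for (int r = 0; r < NUM_RESOURCES; r++) {
        if ((cmd->resource_mask & (1u << r)) && !available[r]) {
            cmd->wait_indicators |= 1u << r;
            u->resource_table[r] = ENTRY_BLOCKED;
        }
    }
}

/* Step 2: if any entry for the command's subset holds the first value,
   write the "second value" into every entry of that subset. */
static void propagate_retry(retry_unit_t *u, const mem_command_t *cmd)
{
    bool any_blocked = false;
    for (int r = 0; r < NUM_RESOURCES; r++)
        if ((cmd->resource_mask & (1u << r)) && u->resource_table[r] == ENTRY_BLOCKED)
            any_blocked = true;
    if (!any_blocked)
        return;
    for (int r = 0; r < NUM_RESOURCES; r++)
        if (cmd->resource_mask & (1u << r))
            u->resource_table[r] = ENTRY_RETRY;
}

int main(void)
{
    retry_unit_t unit = {0};
    mem_command_t cmd = { .resource_mask = 0x0B, .wait_indicators = 0 }; /* needs resources 0,1,3 */
    bool available[NUM_RESOURCES] = { true, false, true, true, true, true, true, true };

    note_unavailable(&unit, &cmd, available);
    propagate_retry(&unit, &cmd);
    for (int r = 0; r < NUM_RESOURCES; r++)
        printf("resource %d: table=%u\n", r, (unsigned)unit.resource_table[r]);
    return 0;
}
```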

    METHODS FOR PERFORMING A MEMORY RESOURCE RETRY
    4.

    Publication Number: US20170242798A1

    Publication Date: 2017-08-24

    Application Number: US15052000

    Application Date: 2016-02-24

    Applicant: Apple Inc.

    CPC classification number: G06F12/0842 G06F2212/1024 G06F2212/283

    Abstract: In an embodiment, an apparatus includes multiple memory resources, and a resource table that includes entries that correspond to respective memory resources of the multiple memory resources. The apparatus also includes a circuit configured to receive a first memory command. The first memory command is associated with a subset of the multiple memory resources. For each memory resource of the subset, the circuit is also configured to set a respective indicator associated with the first memory command, and to store a first value in a first entry of the resource table in response to a determination that the respective memory resource is unavailable. The circuit is also configured to store a second value in each entry of the resource table that corresponds to a memory resource of the subset in response to a determination that an entry corresponding to a given memory resource of the subset includes the first value.

    Least recently used mechanism for cache line eviction from a cache memory
    5.
    Granted Patent
    Status: In Force

    Publication Number: US09563575B2

    Publication Date: 2017-02-07

    Application Number: US14929645

    Application Date: 2015-11-02

    Applicant: Apple Inc.

    Abstract: A mechanism for evicting a cache line from a cache memory includes first selecting for eviction a least recently used cache line of a group of invalid cache lines. If all cache lines are valid, selecting for eviction a least recently used cache line of a group of cache lines in which no cache line of the group of cache lines is also stored within a higher level cache memory such as the L1 cache, for example. Lastly, if all cache lines are valid and there are no non-inclusive cache lines, selecting for eviction the least recently used cache line stored in the cache memory.
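    The eviction policy has three tiers: prefer the least recently used invalid line, then the least recently used line that is not also held in a higher-level cache such as the L1, then the overall least recently used line. A small C sketch of that selection order is below; the associativity and the age-counter representation are assumed for illustration.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_WAYS 8  /* assumed associativity */

/* Illustrative per-way state; field names are not from the patent. */
typedef struct {
    bool     valid;
    bool     in_higher_level; /* line is also present in the L1 cache */
    uint32_t lru_age;         /* larger value == less recently used */
} way_state_t;

/* Pick the least recently used way among those matching a predicate. */
static int lru_among(const way_state_t set[NUM_WAYS],
                     bool (*match)(const way_state_t *))
{
    int best = -1;
    for (int w = 0; w < NUM_WAYS; w++)
        if (match(&set[w]) && (best < 0 || set[w].lru_age > set[best].lru_age))
            best = w;
    return best;
}

static bool is_invalid(const way_state_t *w)   { return !w->valid; }
static bool is_not_in_l1(const way_state_t *w) { return w->valid && !w->in_higher_level; }
static bool any_way(const way_state_t *w)      { (void)w; return true; }

/* Three-tier victim selection as described in the abstract:
   1) LRU invalid line, 2) LRU line not also held in L1, 3) global LRU line. */
static int select_victim(const way_state_t set[NUM_WAYS])
{
    int victim = lru_among(set, is_invalid);
    if (victim < 0)
        victim = lru_among(set, is_not_in_l1);
    if (victim < 0)
        victim = lru_among(set, any_way);
    return victim;
}

int main(void)
{
    way_state_t set[NUM_WAYS] = {
        { true, true, 5 }, { true, false, 2 }, { true, true, 7 }, { true, false, 6 },
        { true, true, 1 }, { true, true,  4 }, { true, true, 3 }, { true, true,  0 },
    };
    /* All ways valid, so the LRU way not present in L1 (way 3, age 6) is chosen. */
    printf("victim way = %d\n", select_victim(set));
    return 0;
}
```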

    Cache way prediction
    6.
    Granted Patent

    Publication Number: US10157137B1

    Publication Date: 2018-12-18

    Application Number: US14861470

    Application Date: 2015-09-22

    Applicant: Apple Inc.

    Abstract: Techniques are disclosed relating to set-associative caches in processors. In one embodiment, an integrated circuit is disclosed that includes a set-associative cache configured to receive a request for a data block stored in one of a plurality of ways within the cache, the request specifying an address, a portion of which is a tag value. In such an embodiment, the integrated circuit includes a way prediction circuit configured to predict, based on the tag value, a way in which the requested data block is stored. The integrated circuit further includes a tag array circuit configured to perform a comparison of a portion of the tag value with a set of previously stored tag portions corresponding to the plurality of ways. The tag array circuit is further configured to determine whether the request hits in the cache based on the predicted way and an output of the comparison.
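    The abstract pairs a tag-based way predictor with a partial-tag comparison across all ways, and declares a hit only when both agree. The C sketch below illustrates that structure; the hash used for prediction, the partial-tag width, and the associativity are assumptions, since the abstract does not specify them.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_WAYS     4   /* assumed associativity */
#define PARTIAL_BITS 8   /* assumed width of the stored tag portion */

/* Illustrative tag-array state for one set; names are not from the patent. */
typedef struct {
    uint16_t partial_tag[NUM_WAYS]; /* low PARTIAL_BITS of each way's tag */
    bool     valid[NUM_WAYS];
} tag_set_t;

/* Assumed prediction function: fold the tag down to a way index. The
   abstract only says the prediction is based on the tag value. */
static unsigned predict_way(uint32_t tag)
{
    return (tag ^ (tag >> 2) ^ (tag >> 7)) % NUM_WAYS;
}

/* Compare a portion of the tag against every way's stored portion, then
   declare a hit only if the match lands in the predicted way. */
static bool lookup(const tag_set_t *set, uint32_t tag, unsigned *way_out)
{
    unsigned predicted = predict_way(tag);
    uint16_t portion = tag & ((1u << PARTIAL_BITS) - 1);

    bool match[NUM_WAYS];
    for (unsigned w = 0; w < NUM_WAYS; w++)
        match[w] = set->valid[w] && set->partial_tag[w] == portion;

    *way_out = predicted;
    return match[predicted];
}

int main(void)
{
    uint32_t tag = 0x3A5;
    tag_set_t set = { .partial_tag = {0}, .valid = {0} };
    unsigned w = predict_way(tag);
    set.partial_tag[w] = tag & ((1u << PARTIAL_BITS) - 1);
    set.valid[w] = true;

    unsigned hit_way;
    bool hit = lookup(&set, tag, &hit_way);
    printf("predicted way %u, hit=%d\n", hit_way, hit);
    return 0;
}
```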

    Methods for performing a memory resource retry
    7.

    Publication Number: US09990294B2

    Publication Date: 2018-06-05

    Application Number: US15052000

    Application Date: 2016-02-24

    Applicant: Apple Inc.

    CPC classification number: G06F12/0842 G06F2212/1024 G06F2212/283

    Abstract: In an embodiment, an apparatus includes multiple memory resources, and a resource table that includes entries that correspond to respective memory resources of the multiple memory resources. The apparatus also includes a circuit configured to receive a first memory command. The first memory command is associated with a subset of the multiple memory resources. For each memory resource of the subset, the circuit is also configured to set a respective indicator associated with the first memory command, and to store a first value in a first entry of the resource table in response to a determination that the respective memory resource is unavailable. The circuit is also configured to store a second value in each entry of the resource table that corresponds to a memory resource of the subset in response to a determination that an entry corresponding to a given memory resource of the subset includes the first value.

    Cache pre-fetch merge in pending request buffer
    8.
    Granted Patent
    Status: In Force

    Publication Number: US09454486B2

    Publication Date: 2016-09-27

    Application Number: US13940525

    Application Date: 2013-07-12

    Applicant: Apple Inc.

    Abstract: An apparatus for processing cache requests in a computing system is disclosed. The apparatus may include a pending request buffer and a control circuit. The pending request buffer may include a plurality of buffer entries. The control circuit may be coupled to the pending request buffer and may be configured to receive a request for a first cache line from a pre-fetch engine, and store the received request in an entry of the pending request buffer. The control circuit may be further configured to receive a request for a second cache line from a processor, and store the request received from the processor in the entry of the pending request buffer in response to a determination that the second cache line is the same as the first cache line.
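    The mechanism stores a pre-fetch request in a pending request buffer entry and, when a later processor request targets the same cache line, merges it into that existing entry instead of allocating a new one. A hedged C sketch follows; the buffer size, line size, and entry fields are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BUFFER_ENTRIES   16   /* assumed pending request buffer size */
#define CACHE_LINE_BYTES 64u  /* assumed line size */

/* Illustrative buffer entry; field names are not from the patent. */
typedef struct {
    bool     in_use;
    uint64_t line_addr;     /* cache line the outstanding request targets */
    bool     from_prefetch; /* entry originally allocated by the pre-fetch engine */
    bool     demand_merged; /* a processor request was merged into this entry */
} prb_entry_t;

static uint64_t line_of(uint64_t addr) { return addr / CACHE_LINE_BYTES; }

/* Allocate a free entry for a new request. */
static int allocate(prb_entry_t buf[BUFFER_ENTRIES], uint64_t addr, bool from_prefetch)
{
    for (int i = 0; i < BUFFER_ENTRIES; i++) {
        if (!buf[i].in_use) {
            buf[i] = (prb_entry_t){ .in_use = true, .line_addr = line_of(addr),
                                    .from_prefetch = from_prefetch, .demand_merged = false };
            return i;
        }
    }
    return -1; /* buffer full */
}

/* A processor (demand) request for the same cache line is merged into the
   existing pre-fetch entry instead of allocating a new one. */
static int handle_demand(prb_entry_t buf[BUFFER_ENTRIES], uint64_t addr)
{
    for (int i = 0; i < BUFFER_ENTRIES; i++) {
        if (buf[i].in_use && buf[i].line_addr == line_of(addr)) {
            buf[i].demand_merged = true; /* store the processor request in the same entry */
            return i;
        }
    }
    return allocate(buf, addr, false);   /* no pending pre-fetch: allocate a new entry */
}

int main(void)
{
    prb_entry_t buf[BUFFER_ENTRIES] = {0};
    int p = allocate(buf, 0x2000, true);   /* pre-fetch engine requests a line */
    int d = handle_demand(buf, 0x2008);    /* processor later requests the same line */
    printf("prefetch entry %d, demand merged into entry %d\n", p, d);
    return 0;
}
```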

    Least recently used mechanism for cache line eviction from a cache memory
    9.
    Granted Patent
    Status: In Force

    Publication Number: US09176879B2

    Publication Date: 2015-11-03

    Application Number: US13946327

    Application Date: 2013-07-19

    Applicant: Apple Inc.

    Abstract: A mechanism for evicting a cache line from a cache memory includes first selecting for eviction a least recently used cache line of a group of invalid cache lines. If all cache lines are valid, selecting for eviction a least recently used cache line of a group of cache lines in which no cache line of the group of cache lines is also stored within a higher level cache memory such as the L1 cache, for example. Lastly, if all cache lines are valid and there are no non-inclusive cache lines, selecting for eviction the least recently used cache line stored in the cache memory.

    L2 CACHE RETENTION MODE
    10.
    Patent Application
    Status: In Force

    Publication Number: US20150277541A1

    Publication Date: 2015-10-01

    Application Number: US14224773

    Application Date: 2014-03-25

    Applicant: Apple Inc.

    Abstract: Systems and methods for reducing leakage power in a L2 cache within a SoC. The L2 cache is partitioned into multiple banks, and each bank has its own separate power supply. An idle counter is maintained for each bank to count a number of cycles during which the bank has been inactive. The temperature and leaky factor of the SoC are used to select an operating point of the SoC. Based on the operating point, an idle counter threshold is set, with a high temperature and high leaky factor corresponding to a relatively low idle counter threshold, and with a low temperature and low leaky factor corresponding to a relatively high idle counter threshold. When a given idle counter exceeds the idle counter threshold, the voltage supplied to the corresponding bank is reduced to a voltage sufficient for retention of data but not for access.
