ADVANCED COARSE-GRAINED CACHE POWER MANAGEMENT
    11.
    Invention application
    ADVANCED COARSE-GRAINED CACHE POWER MANAGEMENT (in force)

    Publication No.: US20140297959A1

    Publication date: 2014-10-02

    Application No.: US13855174

    Filing date: 2013-04-02

    Applicant: APPLE INC.

    Abstract: Methods and apparatuses for reducing power consumption of a system cache within a memory controller. The system cache includes multiple ways, and each way is powered independently of the other ways. A target active way count is maintained and the system cache attempts to keep the number of currently active ways equal to the target active way count. The bandwidth and allocation intention of the system cache is monitored. Based on these characteristics, the system cache adjusts the target active way count up or down, which then causes the number of currently active ways to rise or fall in response to the adjustment to the target active way count.

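    The way-count control loop described in the abstract can be sketched as a small model. The thresholds, step size, and class/field names below are invented for illustration and are not taken from the patent:

```python
# Hypothetical sketch of the coarse-grained policy: a target active way count
# is adjusted from observed bandwidth and allocation intent, and the set of
# powered ways then tracks that target one step at a time.

class WayPowerManager:
    def __init__(self, num_ways=16, target=16):
        self.num_ways = num_ways
        self.target = target      # target active way count
        self.active = num_ways    # currently powered ways

    def observe(self, bandwidth, allocating):
        """Raise the target under high load with allocation intent,
        lower it when the cache is quiet. Thresholds are invented."""
        if bandwidth > 0.75 and allocating:
            self.target = min(self.num_ways, self.target + 1)
        elif bandwidth < 0.25:
            self.target = max(1, self.target - 1)

    def step(self):
        """Move the active way count one step toward the target."""
        if self.active < self.target:
            self.active += 1      # power up a way
        elif self.active > self.target:
            self.active -= 1      # flush and power down a way
```

    Decoupling the target from the actual count mirrors the abstract's description: the adjustment changes the target, and the active-way count then rises or falls in response.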

    SYSTEM CACHE WITH DATA PENDING STATE
    12.
    Invention application
    SYSTEM CACHE WITH DATA PENDING STATE (pending, published)

    Publication No.: US20140089600A1

    Publication date: 2014-03-27

    Application No.: US13629138

    Filing date: 2012-09-27

    Applicant: APPLE INC.

    CPC classification number: G06F12/0859 G06F12/126 Y02D10/13

    Abstract: Methods and apparatuses for utilizing a data pending state for cache misses in a system cache. To reduce the size of a miss queue that is searched by subsequent misses, a cache line storage location is allocated in the system cache for a miss and the state of the cache line storage location is set to data pending. A subsequent request that hits to the cache line storage location will detect the data pending state and as a result, the subsequent request will be sent to a replay buffer. When the fill for the original miss comes back from external memory, the state of the cache line storage location is updated to a clean state. Then, the request stored in the replay buffer is reactivated and allowed to complete its access to the cache line storage location.

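    A minimal model of the data-pending flow described above: a miss allocates a line in a pending state, a second request that hits that line is parked in a replay buffer, and the fill flips the state to clean and replays the parked request. State names and structures are illustrative, not the patent's implementation:

```python
PENDING, CLEAN = "data_pending", "clean"

class SystemCache:
    def __init__(self):
        self.lines = {}     # addr -> state of allocated cache line location
        self.replay = []    # requests parked on a data-pending line

    def request(self, addr):
        state = self.lines.get(addr)
        if state == PENDING:
            self.replay.append(addr)   # hit on pending data: park it
            return "replayed"
        if state == CLEAN:
            return "hit"
        self.lines[addr] = PENDING     # miss: allocate in data-pending state
        return "miss"

    def fill(self, addr):
        """Fill from external memory completes: mark clean, replay waiters."""
        self.lines[addr] = CLEAN
        done = [a for a in self.replay if a == addr]
        self.replay = [a for a in self.replay if a != addr]
        return [self.request(a) for a in done]   # replayed requests now hit
```

    Because the subsequent request resolves against the allocated line rather than the miss queue, the queue that later misses must search stays small, which is the stated goal.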

    SYSTEM CACHE WITH COARSE GRAIN POWER MANAGEMENT
    13.
    Invention application
    SYSTEM CACHE WITH COARSE GRAIN POWER MANAGEMENT (in force)

    Publication No.: US20140089590A1

    Publication date: 2014-03-27

    Application No.: US13629563

    Filing date: 2012-09-27

    Applicant: APPLE INC.

    CPC classification number: G06F1/3225 G06F2212/601

    Abstract: Methods and apparatuses for reducing power consumption of a system cache within a memory controller. The system cache includes multiple ways, and individual ways are powered down when cache activity is low. A maximum active way configuration register is set by software and determines the maximum number of ways which are permitted to be active. When searching for a cache line replacement candidate, a linear feedback shift register (LFSR) is used to select from the active ways. This ensures that each active way has an equal chance of getting picked for finding a replacement candidate when one or more of the ways are inactive.

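    The replacement-candidate selection can be sketched as follows. The 8-bit Fibonacci LFSR taps here are a common maximal-length choice, not taken from the patent, and the 16-way geometry is assumed:

```python
def lfsr_next(state, taps=(7, 5, 4, 3), width=8):
    """Advance an 8-bit Fibonacci LFSR one step (taps for x^8+x^6+x^5+x^4+1)."""
    bit = 0
    for t in taps:
        bit ^= (state >> t) & 1
    return ((state << 1) | bit) & ((1 << width) - 1)

def pick_active_way(state, active_ways, num_ways=16):
    """Step the LFSR until its value names an active way, so inactive
    (powered-down) ways are never chosen as replacement candidates."""
    while True:
        state = lfsr_next(state)
        way = state % num_ways
        if way in active_ways:
            return state, way
```

    Rejecting and re-rolling on inactive ways is what gives each active way a roughly equal chance of being picked when some ways are powered down.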

    Coherence processing with pre-kill mechanism to avoid duplicated transaction identifiers
    14.
    Granted patent
    Coherence processing with pre-kill mechanism to avoid duplicated transaction identifiers (in force)

    Publication No.: US09465740B2

    Publication date: 2016-10-11

    Application No.: US13860885

    Filing date: 2013-04-11

    Applicant: Apple Inc.

    CPC classification number: G06F12/0828 G06F2212/1008 G06F2212/507

    Abstract: An apparatus for processing coherency transactions in a computing system is disclosed. The apparatus may include a request queue circuit, a duplicate tag circuit, and a memory interface unit. The request queue circuit may be configured to generate a speculative read request dependent upon a received read transaction. The duplicate tag circuit may be configured to store copies of tags from one or more cache memories, and to generate a kill message in response to a determination that data requested in the received read transaction is stored in a cache memory. The memory interface unit may be configured to store the generated speculative read request dependent upon a stall condition. The stored speculative read request may be sent to a memory controller dependent upon the stall condition. The memory interface unit may be further configured to delete the speculative read request in response to the kill message.

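    A toy model of the pre-kill flow in the abstract: a speculative read is either issued immediately or held under a stall condition, the duplicate-tag lookup may send a kill message, and a request killed while held is deleted before it ever reaches the memory controller. Class and method names are illustrative:

```python
class MemoryInterface:
    def __init__(self):
        self.pending = {}   # txn id -> speculative read held under a stall
        self.sent = []      # reads actually issued to the memory controller

    def speculative_read(self, txn, stalled):
        if stalled:
            self.pending[txn] = True   # hold until the stall clears
        else:
            self.sent.append(txn)

    def kill(self, txn):
        """Duplicate tags hit in a cache: drop the read if still held."""
        self.pending.pop(txn, None)

    def unstall(self):
        """Stall condition clears: issue whatever survived the kill window."""
        for txn in list(self.pending):
            self.sent.append(txn)
        self.pending.clear()
```

    The stall window is what makes the "pre-kill" possible: a killed transaction never occupies an identifier at the memory controller, avoiding duplicates.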

    Mechanism for sharing private caches in a SoC
    15.
    Granted patent
    Mechanism for sharing private caches in a SoC (in force)

    Publication No.: US09280471B2

    Publication date: 2016-03-08

    Application No.: US14081549

    Filing date: 2013-11-15

    Applicant: Apple Inc.

    Abstract: Systems, processors, and methods for sharing an agent's private cache with other agents within a SoC. Many agents in the SoC have a private cache in addition to the shared caches and memory of the SoC. If an agent's processor is shut down or operating at less than full capacity, the agent's private cache can be shared with other agents. When a requesting agent generates a memory request and the memory request misses in the memory cache, the memory cache can allocate the memory request in a separate agent's cache rather than allocating the memory request in the memory cache.

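    A rough sketch of the allocation decision: on a memory-cache miss, if some agent's private cache is idle (its processor shut down or lightly loaded), the line is allocated there instead of in the memory cache. The idle flag and data layout are invented for illustration:

```python
def choose_allocation(memory_cache_hit, agents):
    """Return where a requested line should live: the memory cache, or a
    borrowed private cache belonging to an idle agent."""
    if memory_cache_hit:
        return "memory_cache"
    for agent in agents:
        if agent["idle"]:            # processor off or under-utilized
            return agent["name"]     # borrow this agent's private cache
    return "memory_cache"            # no idle agent: allocate locally
```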

    Memory power savings in idle display case
    16.
    Granted patent
    Memory power savings in idle display case (in force)

    Publication No.: US09261939B2

    Publication date: 2016-02-16

    Application No.: US13890306

    Filing date: 2013-05-09

    Applicant: Apple Inc.

    Abstract: In an embodiment, a system includes a memory controller that includes a memory cache and a display controller configured to control a display. The system may be configured to detect that the images being displayed are essentially static, and may be configured to cause the display controller to request allocation in the memory cache for source frame buffer data. In some embodiments, the system may also alter power management configuration in the memory cache to prevent the memory cache from shutting down or reducing its effective size during the idle screen case, so that the frame buffer data may remain cached. During times that the display is dynamically changing, the frame buffer data may not be cached in the memory cache and the power management configuration may permit the shutting down/size reduction in the memory cache.

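    A toy version of the idle-screen detection and policy above: after a run of identical frames the frame buffer is cached and cache power-down is blocked; any change reverts both. The threshold of three repeated frames is invented:

```python
class IdleDisplayPolicy:
    STATIC_THRESHOLD = 3   # invented: frames that must repeat before "idle"

    def __init__(self):
        self.last_frame = None
        self.same_count = 0

    def on_frame(self, frame):
        """Returns (cache_frame_buffer, allow_cache_power_down)."""
        if frame == self.last_frame:
            self.same_count += 1
        else:
            self.same_count = 0
            self.last_frame = frame
        if self.same_count >= self.STATIC_THRESHOLD:
            return True, False    # idle screen: keep it cached, block shutdown
        return False, True        # dynamic content: do not cache, allow power-down
```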

    System cache with speculative read engine
    17.
    Granted patent
    System cache with speculative read engine (in force)

    Publication No.: US09201796B2

    Publication date: 2015-12-01

    Application No.: US13629172

    Filing date: 2012-09-27

    Applicant: Apple Inc.

    Abstract: Methods and apparatuses for processing speculative read requests in a system cache within a memory controller. To expedite a speculative read request, the request is sent on parallel paths through the system cache. A first path goes through a speculative read engine to determine if the speculative read request meets the conditions for accessing memory. A second path involves performing a tag lookup to determine if the data referenced by the request is already in the system cache. If the speculative read request meets the conditions for accessing memory, the request is sent to a miss queue where it is held until a confirm or cancel signal is received from the tag lookup mechanism.

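    The two parallel paths can be condensed into a sketch: the speculative read engine decides whether the request may go to memory at all, while the tag lookup runs in parallel and either cancels the held request (hit) or confirms it (miss). Names and return values are illustrative:

```python
def speculative_read(meets_memory_conditions, tag_lookup_hit):
    """Model the parallel-path resolution for one speculative read."""
    miss_queue = []
    # Path 1: the speculative read engine checks memory-access conditions.
    if meets_memory_conditions:
        miss_queue.append("spec_read")   # held awaiting confirm/cancel
    # Path 2: the tag lookup resolves the held request.
    if not miss_queue:
        return "not_issued"
    return "cancelled" if tag_lookup_hit else "sent_to_memory"
```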

    System cache with sticky removal engine
    18.
    Granted patent
    System cache with sticky removal engine (in force)

    Publication No.: US08886886B2

    Publication date: 2014-11-11

    Application No.: US13629865

    Filing date: 2012-09-28

    Applicant: Apple Inc.

    CPC classification number: G06F12/126 G06F1/3225 G06F12/0842

    Abstract: Methods and apparatuses for releasing the sticky state of cache lines for one or more group IDs. A sticky removal engine walks through the tag memory of a system cache looking for matches with a first group ID which is clearing its cache lines from the system cache. The engine clears the sticky state of each cache line belonging to the first group ID. If the engine receives a release request for a second group ID, the engine records the current index to log its progress through the tag memory. Then, the engine continues its walk through the tag memory looking for matches with either the first or second group ID. The engine wraps around to the start of the tag memory and continues its walk until reaching the recorded index for the second group ID.

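    The wrap-around walk can be modeled directly: the engine sweeps the tag memory clearing the first group, records its index when a release for a second group arrives, clears both groups for the remainder of the sweep, then wraps to the start and stops for the second group at the recorded index. The flat `tags` list is a stand-in for the tag memory:

```python
def sticky_walk(tags, first_gid, second_gid, second_arrives_at):
    """tags: list of group IDs, one per tag index.
    Returns the set of indices whose sticky state was cleared."""
    n = len(tags)
    cleared = set()
    recorded = None
    for i in range(n):                       # the walk for the first group
        if i == second_arrives_at:
            recorded = i                     # log progress for the second group
        match_second = recorded is not None and tags[i] == second_gid
        if tags[i] == first_gid or match_second:
            cleared.add(i)
    if recorded is not None:                 # wrap around for the second group
        for i in range(recorded):
            if tags[i] == second_gid:
                cleared.add(i)
    return cleared
```

    Recording the index is what lets a single walk serve both release requests without restarting from zero for the second group.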

    SCHEME TO ESCALATE REQUESTS WITH ADDRESS CONFLICTS
    19.
    Invention application
    SCHEME TO ESCALATE REQUESTS WITH ADDRESS CONFLICTS (in force)

    Publication No.: US20140244920A1

    Publication date: 2014-08-28

    Application No.: US13777777

    Filing date: 2013-02-26

    Applicant: APPLE INC.

    CPC classification number: G06F12/084 G06F12/0859 G06F13/16 G06F2212/304

    Abstract: Techniques for escalating a real time agent's request that has an address conflict with a best effort agent's request. A best effort request can be allocated in a memory controller cache but can progress slowly in the memory system due to its low priority. Therefore, when a real time request has an address conflict with an older best effort request, the best effort request can be escalated if it is still pending when the real time request is received at the memory controller cache. Escalating the best effort request can include setting the push attribute of the best effort request or sending another request with a push attribute to bypass or push the best effort request.

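    The escalation rule reduces to a small check: when a real-time request arrives, any still-pending best-effort request to the same address gets its push attribute set so it stops blocking. The request layout is invented for illustration:

```python
def handle_real_time_request(addr, pending):
    """pending: list of dicts with 'addr', 'priority', and 'push' keys.
    Escalate any conflicting pending best-effort request in place and
    return the escalated requests."""
    escalated = []
    for req in pending:
        if req["addr"] == addr and req["priority"] == "best_effort":
            req["push"] = True    # escalate so it drains ahead of the RT request
            escalated.append(req)
    return escalated
```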

    Memory power savings in idle display case

    Publication No.: US10310586B2

    Publication date: 2019-06-04

    Application No.: US14980912

    Filing date: 2015-12-28

    Applicant: Apple Inc.

    Abstract: In an embodiment, a system includes a memory controller that includes a memory cache and a display controller configured to control a display. The system may be configured to detect that the images being displayed are essentially static, and may be configured to cause the display controller to request allocation in the memory cache for source frame buffer data. In some embodiments, the system may also alter power management configuration in the memory cache to prevent the memory cache from shutting down or reducing its effective size during the idle screen case, so that the frame buffer data may remain cached. During times that the display is dynamically changing, the frame buffer data may not be cached in the memory cache and the power management configuration may permit the shutting down/size reduction in the memory cache.
