DELAYING CACHE DATA ARRAY UPDATES
    41.
    Invention application
    DELAYING CACHE DATA ARRAY UPDATES (granted)

    Publication No.: US20150149722A1

    Publication Date: 2015-05-28

    Application No.: US14089014

    Filing Date: 2013-11-25

    Applicant: Apple Inc.

    CPC classification number: G06F12/0811 G06F12/0842 G06F12/0857 G06F12/0888

    Abstract: Systems, methods, and apparatuses for reducing writes to the data array of a cache. A cache hierarchy includes one or more L1 caches and an L2 cache inclusive of the L1 cache(s). When a request from the L1 cache misses in the L2 cache, the L2 cache sends a fill request to memory. When the fill data returns from memory, the L2 cache delays writing the fill data to its data array. Instead, this cache line is written to the L1 cache and a clean-evict bit corresponding to the cache line is set in the L1 cache. When the L1 cache evicts this cache line, the L1 cache will write back the cache line to the L2 cache even if the cache line has not been modified.

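    The fill-and-deferred-writeback flow in the abstract can be sketched as a small simulation. This is a hedged illustration only, not Apple's implementation; all class and method names (`L1Cache`, `L2Cache`, `fill`, `evict`) are invented for the sketch, assuming a two-level inclusive hierarchy.

```python
# Toy model of the delayed L2 data-array update scheme described above.
# All names are hypothetical; this is a sketch, not the patented design.

class L2Cache:
    def __init__(self):
        self.tag_array = set()    # which lines the L2 tracks (stays inclusive)
        self.data_array = {}      # line -> data; writes here are what we avoid
        self.data_writes = 0      # count writes to the L2 data array

    def fill(self, l1, line, data_from_memory):
        """On an L2 miss, forward the fill to L1 and skip the L2 data write."""
        self.tag_array.add(line)             # inclusion is kept via the tags
        l1.install(line, data_from_memory, clean_evict=True)

    def write_back(self, line, data):
        """L1 eviction: the data finally lands in the L2 data array."""
        self.data_array[line] = data
        self.data_writes += 1


class L1Cache:
    def __init__(self, l2):
        self.l2 = l2
        self.lines = {}           # line -> (data, clean_evict_bit)

    def install(self, line, data, clean_evict=False):
        self.lines[line] = (data, clean_evict)

    def evict(self, line):
        data, clean_evict = self.lines.pop(line)
        # Write back even an unmodified line when the clean-evict bit is
        # set, because the L2 never wrote the fill into its data array.
        if clean_evict:
            self.l2.write_back(line, data)


l2 = L2Cache()
l1 = L1Cache(l2)
l2.fill(l1, line=0x40, data_from_memory=b"hello")
assert l2.data_writes == 0        # the fill did not touch the L2 data array
l1.evict(0x40)
assert l2.data_array[0x40] == b"hello" and l2.data_writes == 1
```

    The saving comes from lines that are evicted from the L1 and from the L2 without ever being re-read: in this scheme they cost one L2 data-array write instead of two.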

    MECHANISM FOR SHARING PRIVATE CACHES IN A SOC
    42.
    Invention application
    MECHANISM FOR SHARING PRIVATE CACHES IN A SOC (granted)

    Publication No.: US20150143044A1

    Publication Date: 2015-05-21

    Application No.: US14081549

    Filing Date: 2013-11-15

    Applicant: APPLE INC.

    Abstract: Systems, processors, and methods for sharing an agent's private cache with other agents within a SoC. Many agents in the SoC have a private cache in addition to the shared caches and memory of the SoC. If an agent's processor is shut down or operating at less than full capacity, the agent's private cache can be shared with other agents. When a requesting agent generates a memory request and the memory request misses in the memory cache, the memory cache can allocate the memory request in a separate agent's cache rather than allocating the memory request in the memory cache.

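    The allocation decision described above can be sketched as follows. The `Agent` and `MemoryCache` classes and the first-idle-agent policy are assumptions made for this illustration; the patent's actual selection mechanism is not specified in the abstract.

```python
# Hypothetical sketch of lending an idle agent's private cache to the
# shared memory cache, as the abstract describes. Names and the
# allocation policy are invented for this example.

class Agent:
    def __init__(self, name):
        self.name = name
        self.active = True        # processor powered on / at full capacity
        self.private_cache = {}   # line -> data


class MemoryCache:
    """Shared memory cache that may allocate a missing line into an idle
    agent's private cache instead of its own storage."""
    def __init__(self, agents):
        self.agents = agents
        self.storage = {}

    def allocate_on_miss(self, line, data):
        # Prefer a private cache whose owner is shut down or idle.
        for agent in self.agents:
            if not agent.active:
                agent.private_cache[line] = data
                return agent.name
        self.storage[line] = data
        return "memory_cache"


cpu = Agent("cpu")
gpu = Agent("gpu")
gpu.active = False                # GPU powered down: its cache is lendable
mc = MemoryCache([cpu, gpu])
assert mc.allocate_on_miss(0x100, b"fill") == "gpu"
assert 0x100 in gpu.private_cache and 0x100 not in mc.storage
```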

    CACHE PRE-FETCH MERGE IN PENDING REQUEST BUFFER
    43.
    Invention application
    CACHE PRE-FETCH MERGE IN PENDING REQUEST BUFFER (granted)

    Publication No.: US20150019824A1

    Publication Date: 2015-01-15

    Application No.: US13940525

    Filing Date: 2013-07-12

    Applicant: Apple Inc.

    Abstract: An apparatus for processing cache requests in a computing system is disclosed. The apparatus may include a pending request buffer and a control circuit. The pending request buffer may include a plurality of buffer entries. The control circuit may be coupled to the pending request buffer and may be configured to receive a request for a first cache line from a pre-fetch engine, and store the received request in an entry of the pending request buffer. The control circuit may be further configured to receive a request for a second cache line from a processor, and store the request received from the processor in the entry of the pending request buffer in response to a determination that the second cache line is the same as the first cache line.

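    The merge behavior can be sketched with a small buffer model. This is a hypothetical simplification: the entry layout (a `sources` set keyed by cache line) is invented here, and a real pending request buffer would track much more state.

```python
# Sketch of merging a demand request into an existing pre-fetch entry in
# the pending request buffer; the data structure is hypothetical.

class PendingRequestBuffer:
    def __init__(self):
        self.entries = {}   # cache line -> entry dict

    def add_prefetch(self, line):
        """Request from the pre-fetch engine for a cache line."""
        entry = self.entries.setdefault(line, {"sources": set()})
        entry["sources"].add("prefetch")

    def add_demand(self, line):
        """Request from the processor. If the line already has a pending
        entry (e.g. from the pre-fetch engine), merge into that entry
        instead of allocating a second one and a second memory request."""
        entry = self.entries.setdefault(line, {"sources": set()})
        entry["sources"].add("demand")


prb = PendingRequestBuffer()
prb.add_prefetch(0x80)
prb.add_demand(0x80)          # same cache line: merged, not duplicated
assert len(prb.entries) == 1
assert prb.entries[0x80]["sources"] == {"prefetch", "demand"}
```

    When the fill returns, a single buffer entry can then satisfy both requesters, which is the point of the merge.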

    Method and Apparatus for Determining Tunable Parameters to Use in Power and Performance Management
    44.
    Invention application
    Method and Apparatus for Determining Tunable Parameters to Use in Power and Performance Management (granted)

    Publication No.: US20140237276A1

    Publication Date: 2014-08-21

    Application No.: US13767897

    Filing Date: 2013-02-15

    Applicant: APPLE INC.

    Abstract: Various method and apparatus embodiments for selecting tunable operating parameters in an integrated circuit (IC) are disclosed. In one embodiment, an IC includes a number of functional blocks, each having a local management circuit. The IC also includes a global management unit coupled to each of the functional blocks. The global management unit is configured to determine the operational state of the IC based on the respective operating states of the functional blocks. Responsive to determining the operational state of the IC, the global management unit may indicate that state to the local management circuit of each functional block. The local management circuit for each functional block may then select one or more tunable parameters based on the operational state determined by the global management unit.

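    The two-level scheme can be sketched as a global state summary plus per-block lookup tables. The states ("busy"/"idle", "high"/"low") and the example parameters (`prefetch_depth`, `clock_div`) are assumptions for this illustration, not values from the patent.

```python
# Sketch of the two-level management scheme: a global unit derives one
# IC-wide operational state from the per-block states, and each local
# management circuit selects its tunable parameters from that state.
# All states and tables here are invented for the example.

BLOCK_STATES = {"cpu": "busy", "gpu": "idle", "dsp": "idle"}

def global_operational_state(block_states):
    """Global management unit: summarize per-block operating states."""
    return "high" if any(s == "busy" for s in block_states.values()) else "low"

# Each local management circuit owns a table of tunable parameters
# keyed by the globally determined operational state.
LOCAL_TABLES = {
    "cpu": {"high": {"prefetch_depth": 8}, "low": {"prefetch_depth": 2}},
    "gpu": {"high": {"clock_div": 1},      "low": {"clock_div": 4}},
}

def select_parameters(block, ic_state):
    """Local management circuit: pick tunables for the indicated state."""
    return LOCAL_TABLES[block][ic_state]


state = global_operational_state(BLOCK_STATES)
assert state == "high"
assert select_parameters("gpu", state) == {"clock_div": 1}
```

    Keeping the tables local while the state decision is global matches the division of labor the abstract describes: the global unit decides *what* the chip-wide situation is, and each block decides *how* to respond.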

    MANAGING FAST TO SLOW LINKS IN A BUS FABRIC
    45.
    Invention application
    MANAGING FAST TO SLOW LINKS IN A BUS FABRIC (granted)

    Publication No.: US20140181571A1

    Publication Date: 2014-06-26

    Application No.: US13726437

    Filing Date: 2012-12-24

    Applicant: APPLE INC.

    CPC classification number: G06F5/06 G06F13/38 G06F13/382

    Abstract: Systems and methods for managing fast to slow links in a bus fabric. A pair of link interface units connect agents with a clock mismatch. Each link interface unit includes an asynchronous FIFO for storing transactions that are sent over the clock domain crossing. When the command for a new transaction is ready to be sent while data for the previous transaction is still being sent, the link interface unit prevents the last data beat of the previous transaction from being sent. Instead, after a delay of one or more clock cycles, the last data beat overlaps with the command of the new transaction.

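    The overlap of a transaction's last data beat with the next transaction's command can be sketched with a toy scheduler. This model is deliberately simplified (it always holds the final beat, and it ignores the asynchronous FIFO and clock-ratio details); it only illustrates the overlap described in the abstract.

```python
# Toy cycle-level sketch of overlapping the last data beat of one
# transaction with the command of the next. The scheduling model is a
# simplified assumption, not the patented link interface unit.

def schedule(transactions):
    """Each transaction is (command, [data beats]). Returns a list of
    (command, data) pairs, one per cycle, for the two channels."""
    cycles = []
    held_beat = None
    for cmd, beats in transactions:
        if held_beat is not None:
            # Send the held last beat in the same cycle as the new command.
            cycles.append((cmd, held_beat))
            held_beat = None
        else:
            cycles.append((cmd, None))
        # Send all but the last beat now; hold the last one so it can
        # overlap with the next transaction's command.
        for beat in beats[:-1]:
            cycles.append((None, beat))
        held_beat = beats[-1]
    if held_beat is not None:         # flush after the final transaction
        cycles.append((None, held_beat))
    return cycles


out = schedule([("CMD_A", ["A0", "A1"]), ("CMD_B", ["B0"])])
# CMD_B goes out in the same cycle as A1, the held last beat of A.
assert ("CMD_B", "A1") in out
```

    The payoff is one reclaimed cycle per back-to-back transaction pair: the command of the new transaction no longer has to wait for the previous transaction's data to drain completely.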

    PREFETCHING ACROSS PAGE BOUNDARIES IN HIERARCHICALLY CACHED PROCESSORS
    46.
    Invention application
    PREFETCHING ACROSS PAGE BOUNDARIES IN HIERARCHICALLY CACHED PROCESSORS (granted)

    Publication No.: US20140149632A1

    Publication Date: 2014-05-29

    Application No.: US13689696

    Filing Date: 2012-11-29

    Applicant: APPLE INC.

    Abstract: Processors and methods for preventing lower level prefetch units from stalling at page boundaries. An upper level prefetch unit closest to the processor core issues a preemptive request for a translation of the next page in a given prefetch stream. The upper level prefetch unit sends the translation to the lower level prefetch units prior to the lower level prefetch units reaching the end of the current page for the given prefetch stream. When the lower level prefetch units reach the boundary of the current page, instead of stopping, these prefetch units can continue to prefetch by jumping to the next physical page number provided in the translation.

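    The handoff can be sketched as follows. The dictionary TLB, the 4 KiB page size, the 64-byte line, and the function names are all assumptions made for this illustration; the abstract does not specify these details.

```python
# Sketch of the cross-page prefetch handoff: the upper-level prefetch
# unit preemptively translates the *next* page and hands the mapping to
# the lower-level prefetchers, so they jump to the new physical page at
# the boundary instead of stalling. Page/line sizes are illustrative.

PAGE_SIZE = 4096
LINE_SIZE = 64
TLB = {0x1000: 0xA000, 0x2000: 0xF000}   # virtual page -> physical page

def upper_level_pretranslate(virtual_page):
    """Upper-level prefetch unit: request the next page's translation
    before the lower levels reach the end of the current page."""
    next_vpage = virtual_page + PAGE_SIZE
    return next_vpage, TLB[next_vpage]

def lower_level_next_prefetch(phys_addr, next_page_translation):
    """Lower-level prefetch unit: advance by one line; at the page
    boundary, jump to the pre-translated physical page, don't stall."""
    nxt = phys_addr + LINE_SIZE
    if nxt % PAGE_SIZE == 0:              # crossed out of the current page
        _, next_ppage = next_page_translation
        return next_ppage
    return nxt


handoff = upper_level_pretranslate(0x1000)
addr = 0xA000 + PAGE_SIZE - LINE_SIZE     # last line of physical page 0xA000
assert lower_level_next_prefetch(addr, handoff) == 0xF000
```

    Note that the next physical page (0xF000) is not contiguous with the current one (0xA000), which is exactly why the lower-level prefetchers cannot simply increment the physical address and need the translation handed down.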
