Application aware SoC memory cache partitioning

    Publication number: US11232033B2

    Publication date: 2022-01-25

    Application number: US16530216

    Filing date: 2019-08-02

    Applicant: Apple Inc.

    IPC classification: G06F12/0842 G06F12/0895

    Abstract: Systems, apparatuses, and methods for dynamically partitioning a memory cache among a plurality of agents are described. A system includes a plurality of agents, a communication fabric, a memory cache, and a lower-level memory. The partitioning of the memory cache for the active data streams of the agents is dynamically adjusted to reduce memory bandwidth and increase power savings across a wide range of applications. A memory cache driver monitors activations and characteristics of the data streams of the system. When a change is detected, the memory cache driver dynamically updates the memory cache allocation policy and quotas for the agents. The quotas specify how much of the memory cache each agent is allowed to use. The updates are communicated to the memory cache controller to enforce the new policy and the new quotas for the various agents accessing the memory.
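
    As a rough illustration of the driver-side flow in the abstract above, the C sketch below recomputes per-agent quotas whenever a data stream activates or changes its characteristics. The structure names, the proportional-share policy, and the 8 MB cache size are assumptions made for this example, not details taken from the patent.

        /* Minimal user-space sketch of a driver-side quota recomputation.
           All names (agent_stream, recompute_quotas, CACHE_SIZE_KB) are
           illustrative, not taken from the patent. */
        #include <stdio.h>
        #include <stdbool.h>

        #define CACHE_SIZE_KB 8192   /* assumed total memory cache capacity */
        #define NUM_AGENTS    3

        struct agent_stream {
            const char *name;
            bool        active;      /* stream currently activated          */
            unsigned    demand_kb;   /* estimated working set of the stream */
            unsigned    quota_kb;    /* share of the cache granted to agent */
        };

        /* Recompute per-agent quotas whenever a stream activates, deactivates,
           or changes its characteristics: give each active agent a share of
           the cache proportional to its demand. */
        static void recompute_quotas(struct agent_stream *a, int n)
        {
            unsigned total_demand = 0;
            for (int i = 0; i < n; i++)
                if (a[i].active)
                    total_demand += a[i].demand_kb;

            for (int i = 0; i < n; i++) {
                if (!a[i].active || total_demand == 0)
                    a[i].quota_kb = 0;
                else
                    a[i].quota_kb = (unsigned)((unsigned long long)CACHE_SIZE_KB *
                                               a[i].demand_kb / total_demand);
                /* In a real system the new quota would be written to the
                   memory cache controller here; we just report it. */
                printf("%-8s quota = %u KB\n", a[i].name, a[i].quota_kb);
            }
        }

        int main(void)
        {
            struct agent_stream agents[NUM_AGENTS] = {
                { "gpu",     true,  6000, 0 },
                { "display", true,  2000, 0 },
                { "isp",     false, 4000, 0 },
            };

            recompute_quotas(agents, NUM_AGENTS);   /* initial partitioning    */

            agents[2].active = true;                /* camera stream starts    */
            puts("-- isp stream activated --");
            recompute_quotas(agents, NUM_AGENTS);   /* driver reacts to change */
            return 0;
        }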

    PARALLEL COHERENCE AND MEMORY CACHE PROCESSING PIPELINES

    Publication number: US20200081838A1

    Publication date: 2020-03-12

    Application number: US16129527

    Filing date: 2018-09-12

    Applicant: Apple Inc.

    Abstract: Systems, apparatuses, and methods for performing coherence processing and memory cache processing in parallel are disclosed. A system includes a communication fabric and a plurality of dual-processing pipelines. Each dual-processing pipeline includes a coherence processing pipeline and a memory cache processing pipeline. The communication fabric forwards a transaction to a given dual-processing pipeline, with the communication fabric selecting the given dual-processing pipeline, from the plurality of dual-processing pipelines, based on a hash of the address of the transaction. The given dual-processing pipeline performs a duplicate tag lookup in parallel with a memory cache tag lookup for the transaction. By performing the duplicate tag lookup and the memory cache tag lookup in a parallel fashion rather than in a serial fashion, latency and power consumption are reduced while performance is enhanced.
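
    The sketch below is a simplified software model of the routing and lookup flow described in this abstract: the fabric picks one of several dual-processing pipelines from a hash of the transaction address, and the duplicate-tag and memory-cache-tag lookups are issued independently of one another. The hash function, pipeline count, and hit conditions are invented for illustration; in hardware the two lookups proceed in the same cycle rather than as two sequential C calls.

        /* Software model of the hash-based routing and the two independent
           lookups. All constants and functions are illustrative only. */
        #include <stdint.h>
        #include <stdbool.h>
        #include <stdio.h>

        #define NUM_PIPELINES 4

        /* Fabric-side selection: pick a dual-processing pipeline from a hash
           of the transaction address so traffic spreads across pipelines. */
        static unsigned select_pipeline(uint64_t addr)
        {
            uint64_t h = addr >> 6;          /* drop cache-line offset bits */
            h ^= h >> 17;
            h *= 0x9E3779B97F4A7C15ull;      /* simple mixing constant      */
            return (unsigned)(h % NUM_PIPELINES);
        }

        /* Stand-ins for the two independent lookups. Neither result feeds
           the other, which is what allows them to run in parallel. */
        static bool duplicate_tag_lookup(unsigned pipe, uint64_t addr)
        {
            return ((addr ^ pipe) & 0x40) != 0;   /* placeholder "hit" test */
        }

        static bool memory_cache_tag_lookup(unsigned pipe, uint64_t addr)
        {
            return ((addr ^ pipe) & 0x80) != 0;   /* placeholder "hit" test */
        }

        int main(void)
        {
            uint64_t addr = 0x12345680ull;
            unsigned pipe = select_pipeline(addr);

            /* Both lookups are issued for the same transaction; the snoop
               decision and the cache hit/miss decision are then combined. */
            bool coherent_copy_exists = duplicate_tag_lookup(pipe, addr);
            bool cache_hit            = memory_cache_tag_lookup(pipe, addr);

            printf("pipeline %u: dup-tag %s, memory-cache %s\n", pipe,
                   coherent_copy_exists ? "hit" : "miss",
                   cache_hit ? "hit" : "miss");
            return 0;
        }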

    APPLICATION AWARE SOC MEMORY CACHE PARTITIONING

    Publication number: US20210034527A1

    Publication date: 2021-02-04

    Application number: US16530216

    Filing date: 2019-08-02

    Applicant: Apple Inc.

    IPC classification: G06F12/0842 G06F12/0895

    Abstract: Systems, apparatuses, and methods for dynamically partitioning a memory cache among a plurality of agents are described. A system includes a plurality of agents, a communication fabric, a memory cache, and a lower-level memory. The partitioning of the memory cache for the active data streams of the agents is dynamically adjusted to reduce memory bandwidth and increase power savings across a wide range of applications. A memory cache driver monitors activations and characteristics of the data streams of the system. When a change is detected, the memory cache driver dynamically updates the memory cache allocation policy and quotas for the agents. The quotas specify how much of the memory cache each agent is allowed to use. The updates are communicated to the memory cache controller to enforce the new policy and the new quotas for the various agents accessing the memory.
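
    Complementing the driver-side sketch given for the granted patent above, the following C fragment sketches how a memory cache controller might enforce the quotas it is programmed with: a fill is only allowed to grow an agent's footprint while the agent remains under its quota. The table layout and the replace-within-own-allocation policy are assumptions for illustration, not details from the patent.

        /* Controller-side sketch of quota enforcement. Names (quota_table,
           cache_fill_allowed) and the policy are invented for this example. */
        #include <stdbool.h>
        #include <stdio.h>

        #define MAX_AGENTS 8

        struct quota_entry {
            unsigned quota_lines;      /* limit programmed by the driver    */
            unsigned occupancy_lines;  /* lines currently held by the agent */
        };

        static struct quota_entry quota_table[MAX_AGENTS];

        /* Decide whether a miss from `agent` may allocate a new line. If the
           agent is at its quota, it must victimize one of its own lines, so
           its net occupancy stays within the programmed share. */
        static bool cache_fill_allowed(unsigned agent)
        {
            struct quota_entry *q = &quota_table[agent];
            if (q->occupancy_lines < q->quota_lines) {
                q->occupancy_lines++;  /* allocate a fresh line              */
                return true;
            }
            return false;              /* over quota: replace within own set */
        }

        int main(void)
        {
            quota_table[0].quota_lines = 2;           /* tiny quota for demo */
            for (int i = 0; i < 4; i++)
                printf("fill %d: %s\n", i,
                       cache_fill_allowed(0) ? "new line" : "reuse own line");
            return 0;
        }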

    Duplicate tag structure employing single-port tag RAM and dual-port state RAM
    Granted patent (in force)

    Publication number: US09454482B2

    Publication date: 2016-09-27

    Application number: US13928636

    Filing date: 2013-06-27

    Applicant: Apple Inc.

    IPC classification: G06F12/00 G06F12/08

    CPC classification: G06F12/0815

    Abstract: An apparatus for processing cache requests in a computing system is disclosed. The apparatus may include a single-port memory, a dual-port memory, and a control circuit. The single-port memory may be configured to store tag information associated with a cache memory, and the dual-port memory may be configured to store state information associated with the cache memory. The control circuit may be configured to receive a request that includes a tag address and to access the tag and state information stored in the single-port memory and the dual-port memory, respectively, dependent upon the received tag address. A determination of whether the data associated with the received tag address is contained in the cache memory may be made by the control circuit, and the control circuit may update and store state information in the dual-port memory responsive to the determination.
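
    A behavioural C sketch of the lookup path described in this abstract is shown below: tags are held in an array standing in for the single-port RAM, state in an array standing in for the dual-port RAM, and a hit triggers a state update through what would be the second port. The array sizes and the state encoding are assumptions for illustration, not the patent's.

        /* Behavioural model: tag storage and state storage are split, as in
           the single-port / dual-port arrangement described above. */
        #include <stdint.h>
        #include <stdbool.h>
        #include <stdio.h>

        #define NUM_SETS 256

        enum line_state { INVALID = 0, SHARED = 1, EXCLUSIVE = 2, MODIFIED = 3 };

        static uint32_t        tag_ram[NUM_SETS];   /* single-port: tag only   */
        static enum line_state state_ram[NUM_SETS]; /* dual-port: state only   */

        /* Look up a request: index with the low bits, compare the stored tag,
           and if it matches and the state is valid, report a hit and update
           the state through the dual-port array's write port. */
        static bool duplicate_tag_lookup(uint32_t tag_addr)
        {
            unsigned set = tag_addr % NUM_SETS;     /* index portion           */
            uint32_t tag = tag_addr / NUM_SETS;     /* tag portion             */

            bool hit = (tag_ram[set] == tag) && (state_ram[set] != INVALID);
            if (hit)
                state_ram[set] = SHARED;  /* state write on the second port    */
            return hit;
        }

        int main(void)
        {
            /* Pre-load one line so the second lookup hits. */
            tag_ram[0x12 % NUM_SETS]   = 0x12 / NUM_SETS;
            state_ram[0x12 % NUM_SETS] = EXCLUSIVE;

            printf("lookup 0x34: %s\n", duplicate_tag_lookup(0x34) ? "hit" : "miss");
            printf("lookup 0x12: %s\n", duplicate_tag_lookup(0x12) ? "hit" : "miss");
            return 0;
        }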


    DEBUG ACCESS MECHANISM FOR DUPLICATE TAG STORAGE
    Published application (patent in force)

    Publication number: US20140173342A1

    Publication date: 2014-06-19

    Application number: US13713654

    Filing date: 2012-12-13

    Applicant: Apple Inc.

    IPC classification: G06F11/273

    Abstract: A coherence system includes a storage array that may store duplicate tag information associated with a cache memory of a processor. The system may also include a pipeline unit with a number of stages that control accesses to the storage array. The pipeline unit may pass an input/output (I/O) request received on a fabric through the pipeline stages without generating an access to the storage array. The system may also include a debug engine that may reformat the I/O request from the pipeline unit into a debug request and send the debug request to the pipeline unit via a debug bus. In response to receiving the debug request, the pipeline unit may access the storage array. The debug engine may return the result of the access to the storage array to the source of the I/O request via the fabric bus.
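
    The C sketch below models the debug path in this abstract: a fabric I/O request that would normally pass through the pipeline without touching the duplicate tag array is reformatted by a debug engine into a debug request, which does access the array, and the read value is returned to the requester. All structure layouts and function names are invented for this example.

        /* Illustrative model of the debug access mechanism; not the patent's
           actual request formats. */
        #include <stdint.h>
        #include <stdio.h>

        #define DUP_TAG_ENTRIES 64

        static uint32_t dup_tag_array[DUP_TAG_ENTRIES]; /* duplicate tag storage */

        struct io_request    { uint32_t addr; uint32_t source_id; };
        struct debug_request { uint32_t entry_index; uint32_t source_id; };

        /* Debug engine: reformat the fabric I/O request into a debug request
           addressed by array entry rather than by memory address. */
        static struct debug_request reformat_to_debug(const struct io_request *io)
        {
            struct debug_request d = {
                .entry_index = io->addr % DUP_TAG_ENTRIES,
                .source_id   = io->source_id,
            };
            return d;
        }

        /* Pipeline unit: only a debug request generates an access to the
           storage array; the raw read result is returned to the requester. */
        static uint32_t pipeline_debug_access(const struct debug_request *d)
        {
            return dup_tag_array[d->entry_index];
        }

        int main(void)
        {
            dup_tag_array[5] = 0xABCD;                       /* sample content */
            struct io_request io = { .addr = 5, .source_id = 7 };

            struct debug_request d = reformat_to_debug(&io); /* via debug bus  */
            uint32_t result = pipeline_debug_access(&d);

            /* Result is returned to the original requester over the fabric. */
            printf("source %u reads entry %u: 0x%X\n",
                   (unsigned)io.source_id, (unsigned)d.entry_index,
                   (unsigned)result);
            return 0;
        }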


    Cache quota control
    Granted patent

    Publication number: US11914521B1

    Publication date: 2024-02-27

    Application number: US17809822

    Filing date: 2022-06-29

    Applicant: Apple Inc.

    Abstract: A mechanism for cache quota control is disclosed. A cache memory is configured to receive access requests from a plurality of agents, wherein a given request from a given agent of the plurality of agents specifies an identification value associated with the given agent. A cache controller is coupled to the cache memory and is configured to store indications of current allocations of the cache memory to individual ones of the plurality of agents. The cache controller is further configured to track requests to the cache memory based on identification values specified in the requests and to determine whether to update allocations of the cache memory to the individual ones of the plurality of agents based on the tracked requests.
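
    As a closing illustration, the C sketch below models the tracking and adjustment loop in this abstract: the controller counts requests per identification value and, at the end of a fixed window, decides new per-agent allocations in proportion to the observed traffic. The window length, way counts, and proportional policy are assumptions for illustration, not details from the patent.

        /* Illustrative tracking loop; thresholds and policy are invented. */
        #include <stdio.h>

        #define NUM_AGENTS   4
        #define CACHE_WAYS  16
        #define WINDOW     100          /* requests per evaluation window */

        static unsigned request_count[NUM_AGENTS];   /* tracked per ID value */
        static unsigned allocation_ways[NUM_AGENTS]; /* current allocations  */
        static unsigned window_total;

        /* Called on every cache access; the request carries the agent's ID. */
        static void track_request(unsigned agent_id)
        {
            request_count[agent_id]++;
            if (++window_total < WINDOW)
                return;

            /* End of window: decide whether to update allocations, here by
               giving each agent ways proportional to its request share. */
            for (unsigned i = 0; i < NUM_AGENTS; i++) {
                allocation_ways[i] = CACHE_WAYS * request_count[i] / WINDOW;
                request_count[i] = 0;
            }
            window_total = 0;
        }

        int main(void)
        {
            /* Synthetic traffic: agent 0 issues 3x more requests than agent 1. */
            for (int i = 0; i < 100; i++)
                track_request(i % 4 == 3 ? 1 : 0);

            for (unsigned i = 0; i < NUM_AGENTS; i++)
                printf("agent %u: %u ways\n", i, allocation_ways[i]);
            return 0;
        }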