METHODS OF OVERRIDING A RESOURCE RETRY
    51.
    Patent Application

    Publication Number: US20170168940A1

    Publication Date: 2017-06-15

    Application Number: US14969360

    Filing Date: 2015-12-15

    Applicant: Apple Inc.

    CPC classification number: G06F12/0815 G06F12/084 G06F2212/621

    Abstract: In an embodiment, an apparatus includes control circuitry and a memory configured to store a plurality of access instructions. The control circuitry is configured to determine an availability of a resource associated with a given access instruction of the plurality of access instructions. The associated resource is included in a plurality of resources. The control circuitry is also configured to determine a priority level of the given access instruction in response to a determination that the associated resource is unavailable. The control circuitry is further configured to add the given access instruction to a subset of the plurality of access instructions in response to a determination that the priority level is greater than a respective priority level of each access instruction in the subset. The control circuitry is also configured to remove the given access instruction from the subset in response to a determination that the associated resource is available.
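    The retry-subset logic the abstract describes can be modeled in software. The sketch below is illustrative only; the class and field names, the subset bound, and the eviction choice are assumptions, not details from the patent:

```python
from dataclasses import dataclass

@dataclass
class AccessInstruction:
    name: str
    resource: str
    priority: int

class RetryController:
    """Hypothetical model of the claimed control circuitry: an access
    instruction whose resource is busy joins a bounded retry subset only
    if it outranks every instruction already in the subset."""

    def __init__(self, subset_size: int):
        self.subset_size = subset_size
        self.subset: list[AccessInstruction] = []
        self.available: set[str] = set()

    def on_access(self, instr: AccessInstruction) -> bool:
        # Resource available: the instruction proceeds normally.
        if instr.resource in self.available:
            return True
        # Resource unavailable: admit to the retry subset only if this
        # instruction's priority exceeds that of every current member.
        if all(instr.priority > other.priority for other in self.subset):
            self.subset.append(instr)
            if len(self.subset) > self.subset_size:
                self.subset.pop(0)  # evict the oldest entry (an assumption)
        return False

    def on_resource_ready(self, resource: str) -> list[AccessInstruction]:
        # Remove subset entries whose resource has become available.
        self.available.add(resource)
        ready = [i for i in self.subset if i.resource == resource]
        self.subset = [i for i in self.subset if i.resource != resource]
        return ready
```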

    Power control for cache structures
    52.
    Granted Patent (In force)

    Publication Number: US09317102B2

    Publication Date: 2016-04-19

    Application Number: US13733775

    Filing Date: 2013-01-03

    Applicant: Apple Inc.

    CPC classification number: G06F1/3275 G06F1/3225 G11C5/144 Y02D10/14

    Abstract: Techniques are disclosed relating to reducing power consumption in integrated circuits. In one embodiment, an apparatus includes a cache having a set of tag structures and a power management unit. The power management unit is configured to power down a duplicate set of tag structures in response to the cache being powered down. In one embodiment, the cache is configured to provide, to the power management unit, an indication of whether the cache includes valid data. In such an embodiment, the power management unit is configured to power down the cache in response to the cache indicating that the cache does not include valid data. In some embodiments, the duplicate set of tag structures is located within a coherence point configured to maintain coherency between the cache and a memory.

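    The power-down ordering in this abstract can be sketched as a simple policy check. This is a minimal model, assuming a single cache and one duplicate tag set; all names are hypothetical:

```python
class Cache:
    def __init__(self):
        self.valid_lines = 0
        self.powered = True

    def has_valid_data(self) -> bool:
        return self.valid_lines > 0

class PowerManagementUnit:
    """Hypothetical model of the described policy: power down the cache
    once it reports no valid data, then power down the duplicate tag
    structures kept at the coherence point."""

    def __init__(self, cache: Cache):
        self.cache = cache
        self.duplicate_tags_powered = True

    def evaluate(self) -> None:
        # The cache indicates it holds no valid data -> safe to power down.
        if self.cache.powered and not self.cache.has_valid_data():
            self.cache.powered = False
        # Duplicate tags mirror cache contents; once the cache is off,
        # the duplicates carry no useful state and can be powered down too.
        if not self.cache.powered:
            self.duplicate_tags_powered = False
```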

    Flow-ID dependency checking logic
    53.
    Granted Patent (In force)

    Publication Number: US09201791B2

    Publication Date: 2015-12-01

    Application Number: US13736245

    Filing Date: 2013-01-08

    Applicant: Apple Inc.

    CPC classification number: G06F12/0815 G06F9/46 G06F9/466 G06F12/0811 G06F13/18

    Abstract: Systems and methods for maintaining an order of transactions in the coherence point. The coherence point stores attributes associated with received transactions in an input request queue (IRQ). When a new transaction is received with a device ordered attribute, the IRQ is searched for other entries with the same flow ID as the new transaction. If one or more matches are found, the new transaction entry points to the entry for the most recently received transaction with the same flow ID. The new transaction is prevented from exiting the coherence point until the transaction it points to has been sent to its destination.

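    The flow-ID dependency rule described above lends itself to a small queue model. The sketch below is an assumption-laden illustration (entry layout and method names are invented), not the patented implementation:

```python
class InputRequestQueue:
    """Sketch of the flow-ID ordering rule: a new device-ordered
    transaction points at the most recently received queued entry with
    the same flow ID and may not exit until that entry has been sent."""

    def __init__(self):
        # Each entry: {"txn": ..., "flow_id": ..., "dep": index or None}
        self.entries = []

    def enqueue(self, txn, flow_id, device_ordered=True):
        dep = None
        if device_ordered:
            # Search backwards for the most recent entry with the same flow ID.
            for i in range(len(self.entries) - 1, -1, -1):
                if self.entries[i]["flow_id"] == flow_id:
                    dep = i
                    break
        self.entries.append({"txn": txn, "flow_id": flow_id, "dep": dep})

    def can_exit(self, index: int) -> bool:
        # A transaction may exit only after its dependency has been sent.
        dep = self.entries[index]["dep"]
        return dep is None or self.entries[dep].get("sent", False)

    def send(self, index: int):
        if not self.can_exit(index):
            raise RuntimeError("ordering violation: dependency not yet sent")
        self.entries[index]["sent"] = True
```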

    Debug access mechanism for duplicate tag storage
    54.
    Granted Patent (In force)

    Publication Number: US09021306B2

    Publication Date: 2015-04-28

    Application Number: US13713654

    Filing Date: 2012-12-13

    Applicant: Apple Inc.

    CPC classification number: G06F11/273 G06F11/221 G06F11/2236

    Abstract: A coherence system includes a storage array that may store duplicate tag information associated with a cache memory of a processor. The system may also include a pipeline unit that includes a number of stages to control accesses to the storage array. The pipeline unit may pass through the pipeline stages, without generating an access to the storage array, an input/output (I/O) request that is received on a fabric. The system may also include a debug engine that may reformat the I/O request from the pipeline unit into a debug request. The debug engine may send the debug request to the pipeline unit via a debug bus. In response to receiving the debug request, the pipeline unit may access the storage array. The debug engine may return to the source of the I/O request via the fabric bus, a result of the access to the storage array.

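    The pass-through versus debug-access behavior can be illustrated with a toy model. Everything here is assumed for illustration (class names, the request dictionary shape, the single-field reformat); the real mechanism is a hardware pipeline and debug bus:

```python
class DuplicateTagArray:
    def __init__(self):
        self.tags = {}

    def read(self, index):
        return self.tags.get(index)

class Pipeline:
    """Hypothetical model: ordinary fabric I/O requests flow through the
    pipeline stages without touching the tag array; only requests marked
    as debug requests generate an array access."""

    def __init__(self, array: DuplicateTagArray):
        self.array = array

    def process(self, request: dict):
        if request.get("debug"):
            return self.array.read(request["index"])
        return None  # passes through all stages, no array access

class DebugEngine:
    def __init__(self, pipeline: Pipeline):
        self.pipeline = pipeline

    def handle_io_request(self, io_request: dict):
        # Reformat the fabric I/O request into a debug request and resubmit
        # it to the pipeline over the (modeled) debug bus.
        debug_request = {"debug": True, "index": io_request["index"]}
        return self.pipeline.process(debug_request)
```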

    Scalable cache coherency protocol
    56.
    Granted Patent

    Publication Number: US11947457B2

    Publication Date: 2024-04-02

    Application Number: US18058105

    Filing Date: 2022-11-22

    Applicant: Apple Inc.

    CPC classification number: G06F12/0815 G06F12/0831 G06F2212/1032

    Abstract: A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.
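    The race-detection idea (stamping snoops with the state the directory expects the receiver to be in) can be sketched as follows. State names and the directory layout are assumptions for illustration:

```python
class CoherentAgent:
    def __init__(self, name):
        self.name = name
        self.state = "Invalid"

    def receive_snoop(self, expected_state: str) -> bool:
        """Compare the directory's expected state against the actual local
        state; a mismatch signals a race with another in-flight request."""
        return self.state == expected_state

class MemoryController:
    """Sketch of the precise-directory idea: the directory records each
    agent's state per cache block and stamps snoops with the state it
    expects the receiver to be in at delivery time."""

    def __init__(self):
        self.directory = {}  # block address -> {agent name: state}

    def snoop(self, block, agent: CoherentAgent) -> bool:
        expected = self.directory.get(block, {}).get(agent.name, "Invalid")
        in_sync = agent.receive_snoop(expected)
        # If in_sync is False, an earlier transaction is still in flight,
        # and the agent can defer or absorb the snoop accordingly.
        return in_sync
```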

    Scalable Cache Coherency Protocol
    58.
    Published Application

    Publication Number: US20230169003A1

    Publication Date: 2023-06-01

    Application Number: US18160575

    Filing Date: 2023-01-27

    Applicant: Apple Inc.

    CPC classification number: G06F12/0815 G06F12/0831 G06F2212/1032

    Abstract: A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.

    Ensuring Transactional Ordering in I/O Agent

    Publication Number: US20230064526A1

    Publication Date: 2023-03-02

    Application Number: US17657506

    Filing Date: 2022-03-31

    Applicant: Apple Inc.

    Abstract: Techniques are disclosed relating to an I/O agent circuit. The I/O agent circuit may include one or more queues and a transaction pipeline. The I/O agent circuit may issue, to the transaction pipeline from a queue of the one or more queues, a transaction of a series of transactions enqueued in a particular order. The I/O agent circuit may generate, at the transaction pipeline, a determination to return the transaction to the queue based on a detection of one or more conditions being satisfied. Based on the determination, the I/O agent circuit may reject, at the transaction pipeline, up to a threshold number of transactions that issued from the queue after the transaction issued. The I/O agent circuit may insert the transaction at a head of the queue such that the transaction is enqueued at the queue sequentially first for the series of transactions according to the particular order.
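    The bounce-and-reinsert step described above can be modeled with a double-ended queue. This is a minimal sketch, assuming a single queue and a fixed rejection threshold; names and structure are invented for illustration:

```python
from collections import deque

class TransactionQueue:
    """Sketch of the described recovery step: when the pipeline bounces a
    transaction, up to `threshold` younger transactions issued behind it
    are also rejected, and the bounced transaction is reinserted at the
    head so the original order is preserved."""

    def __init__(self, txns, threshold: int):
        self.queue = deque(txns)
        self.threshold = threshold

    def issue(self):
        return self.queue.popleft()

    def bounce(self, txn, younger_issued: list):
        # Reject at most `threshold` of the transactions issued after txn...
        rejected = younger_issued[: self.threshold]
        # ...and requeue them behind txn, which goes back at the head.
        for t in reversed(rejected):
            self.queue.appendleft(t)
        self.queue.appendleft(txn)
        # Anything beyond the threshold stays in flight.
        return younger_issued[self.threshold:]
```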

    Communication Channels with both Shared and Independent Resources

    Publication Number: US20230064187A1

    Publication Date: 2023-03-02

    Application Number: US17455321

    Filing Date: 2021-11-17

    Applicant: Apple Inc.

    Abstract: Techniques are disclosed relating to merging virtual communication channels in a portion of a computing system. In some embodiments, a communication fabric routes first and second classes of traffic with different quality-of-service parameters, using a first virtual channel for the first class and a second virtual channel for the second class. In some embodiments, a memory controller communicates, via the fabric, using a merged virtual channel configured to handle traffic from both the first virtual channel and the second virtual channel. In some embodiments, the system limits the rate at which an agent is allowed to transmit requests of the second class of traffic, but requests by the agent for the first class of traffic are not rate limited. Disclosed techniques may improve independence of virtual channels, relative to sharing the same channel in an entire system, without unduly increasing complexity.
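    The asymmetric rate limiting at the merge point can be sketched with a simple per-window budget. The class name, the window abstraction, and the budget parameter are assumptions for illustration:

```python
class MergedChannel:
    """Sketch of the merged-channel idea: traffic from two virtual
    channels shares one channel at the memory controller, and only the
    second (lower-QoS) traffic class is rate limited."""

    def __init__(self, class2_budget: int):
        self.class2_budget = class2_budget  # requests allowed per window
        self.class2_used = 0
        self.log = []

    def new_window(self):
        self.class2_used = 0

    def submit(self, request, traffic_class: int) -> bool:
        if traffic_class == 2:
            # Second-class requests are rate limited before the merge point.
            if self.class2_used >= self.class2_budget:
                return False  # agent must hold the request for the next window
            self.class2_used += 1
        # First-class traffic is never rate limited.
        self.log.append((traffic_class, request))
        return True
```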
