-
Publication number: US20170168940A1
Publication date: 2017-06-15
Application number: US14969360
Filing date: 2015-12-15
Applicant: Apple Inc.
Inventor: Bikram Saha , Harshavardhan Kaushikkar , Wolfgang H. Klingauf
IPC: G06F12/08
CPC classification number: G06F12/0815 , G06F12/084 , G06F2212/621
Abstract: In an embodiment, an apparatus includes control circuitry and a memory configured to store a plurality of access instructions. The control circuitry is configured to determine an availability of a resource associated with a given access instruction of the plurality of access instructions. The associated resource is included in a plurality of resources. The control circuitry is also configured to determine a priority level of the given access instruction in response to a determination that the associated resource is unavailable. The control circuitry is further configured to add the given access instruction to a subset of the plurality of access instructions in response to a determination that the priority level is greater than a respective priority level of each access instruction in the subset. The control circuitry is also configured to remove the given access instruction from the subset in response to a determination that the associated resource is available.
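As a rough illustration of the behavior described in this abstract, the following Python sketch models a retry subset of high-priority access instructions whose resources are busy. The names (AccessQueue, retry_subset, resource_available, subset_capacity) are assumptions for illustration, not terminology from the patent; the sketch only shows the admit-when-higher-priority and remove-when-resource-free rules.

```python
class AccessInstruction:
    def __init__(self, resource_id, priority):
        self.resource_id = resource_id
        self.priority = priority


class AccessQueue:
    """Toy model of the control circuitry described in the abstract (illustrative only)."""

    def __init__(self, subset_capacity=4):
        self.deferred = []         # instructions waiting without a subset slot
        self.retry_subset = []     # high-priority instructions awaiting a resource
        self.subset_capacity = subset_capacity

    def handle(self, instr, resource_available):
        if resource_available(instr.resource_id):
            return "issue"                       # resource free: issue immediately
        # Resource busy: admit into the subset only if this instruction outranks
        # every instruction already in the subset, as the abstract describes.
        if all(instr.priority > other.priority for other in self.retry_subset):
            self.retry_subset.append(instr)
            self.retry_subset.sort(key=lambda i: i.priority, reverse=True)
            del self.retry_subset[self.subset_capacity:]
            return "added-to-subset"
        self.deferred.append(instr)
        return "deferred"

    def on_resource_available(self, resource_id):
        # Remove subset entries whose resource became available so they can issue.
        ready = [i for i in self.retry_subset if i.resource_id == resource_id]
        self.retry_subset = [i for i in self.retry_subset if i.resource_id != resource_id]
        return ready
```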
-
Publication number: US09317102B2
Publication date: 2016-04-19
Application number: US13733775
Filing date: 2013-01-03
Applicant: Apple Inc.
Inventor: Muditha Kanchana , Gurjeet S. Saund , Harshavardhan Kaushikkar , Erik P. Machnicki , Seye Ewedemi
CPC classification number: G06F1/3275 , G06F1/3225 , G11C5/144 , Y02D10/14
Abstract: Techniques are disclosed relating to reducing power consumption in integrated circuits. In one embodiment, an apparatus includes a cache having a set of tag structures and a power management unit. The power management unit is configured to power down a duplicate set of tag structures in response to the cache being powered down. In one embodiment, the cache is configured to provide, to the power management unit, an indication of whether the cache includes valid data. In such an embodiment, the power management unit is configured to power down the cache in response to the cache indicating that the cache does not include valid data. In some embodiments, the duplicate set of tag structures is located within a coherence point configured to maintain coherency between the cache and a memory.
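A minimal software sketch of the power-down sequencing in this abstract, assuming illustrative names (Cache, PowerManager, has_valid_data, duplicate_tags_powered); it is not the patented circuit, only the ordering of the two power-down decisions:

```python
class Cache:
    def __init__(self):
        self.powered = True
        self.valid_lines = 0              # number of valid cache lines

    def has_valid_data(self):
        return self.valid_lines > 0


class PowerManager:
    """Illustrative sketch of the described power management unit."""

    def __init__(self, cache):
        self.cache = cache
        self.duplicate_tags_powered = True

    def try_power_down_cache(self):
        # Power the cache down only when it indicates it holds no valid data.
        if not self.cache.has_valid_data():
            self.cache.powered = False
            self.on_cache_powered_down()

    def on_cache_powered_down(self):
        # The duplicate tag structures (e.g., in the coherence point) mirror the
        # cache, so they are powered down in response to the cache powering down.
        self.duplicate_tags_powered = False
```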
-
Publication number: US09201791B2
Publication date: 2015-12-01
Application number: US13736245
Filing date: 2013-01-08
Applicant: Apple Inc.
Inventor: Gurjeet S. Saund , Harshavardhan Kaushikkar
CPC classification number: G06F12/0815 , G06F9/46 , G06F9/466 , G06F12/0811 , G06F13/18
Abstract: Systems and methods for maintaining an order of transactions in a coherence point are disclosed. The coherence point stores attributes associated with received transactions in an input request queue (IRQ). When a new transaction with a device-ordered attribute is received, the IRQ is searched for other entries with the same flow ID as the new transaction. If one or more matches are found, the new transaction's entry points to the entry for the most recently received transaction with the same flow ID. The new transaction is prevented from exiting the coherence point until the transaction it points to has been sent to its destination.
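The flow-ID ordering rule can be sketched in a few lines of Python. The names below (IRQEntry, InputRequestQueue, depends_on) are illustrative assumptions; the sketch only shows linking a new device-ordered transaction to the most recent entry with the same flow ID and holding it until that entry has been sent:

```python
class IRQEntry:
    def __init__(self, txn_id, flow_id, device_ordered):
        self.txn_id = txn_id
        self.flow_id = flow_id
        self.device_ordered = device_ordered
        self.depends_on = None      # entry that must be sent before this one exits
        self.sent = False


class InputRequestQueue:
    def __init__(self):
        self.entries = []

    def enqueue(self, entry):
        if entry.device_ordered:
            # Search existing entries for the most recently received transaction
            # with the same flow ID and point the new entry at it.
            same_flow = [e for e in self.entries if e.flow_id == entry.flow_id]
            if same_flow:
                entry.depends_on = same_flow[-1]
        self.entries.append(entry)

    def can_exit(self, entry):
        # A transaction may not leave the coherence point until the entry it
        # points to has been sent to its destination.
        return entry.depends_on is None or entry.depends_on.sent
```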
-
Publication number: US09021306B2
Publication date: 2015-04-28
Application number: US13713654
Filing date: 2012-12-13
Applicant: Apple Inc.
Inventor: Harshavardhan Kaushikkar , Muditha Kanchana , Gurjeet S Saund , Odutola O Ewedemi
IPC: G06F11/00 , G06F11/273 , G06F11/22
CPC classification number: G06F11/273 , G06F11/221 , G06F11/2236
Abstract: A coherence system includes a storage array that may store duplicate tag information associated with a cache memory of a processor. The system may also include a pipeline unit that includes a number of stages to control accesses to the storage array. The pipeline unit may pass an input/output (I/O) request received on a fabric through the pipeline stages without generating an access to the storage array. The system may also include a debug engine that may reformat the I/O request from the pipeline unit into a debug request. The debug engine may send the debug request to the pipeline unit via a debug bus. In response to receiving the debug request, the pipeline unit may access the storage array. The debug engine may return a result of the access to the storage array to the source of the I/O request via the fabric bus.
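A small Python sketch of the debug path in this abstract, under assumed names (DebugEngine, Pipeline, DuplicateTagArray); the debug bus and fabric are modeled as plain function calls, which is a simplification:

```python
class DuplicateTagArray:
    def __init__(self):
        self.entries = {}

    def lookup(self, address):
        return self.entries.get(address, "miss")


class Pipeline:
    """Ordinary fabric I/O requests pass through; only debug requests access the array."""

    def __init__(self, tag_array):
        self.tag_array = tag_array

    def process(self, request):
        if request.get("debug"):
            # A debug request generates an access to the storage array.
            return self.tag_array.lookup(request["address"])
        return None     # plain I/O request: passes through the stages untouched


class DebugEngine:
    def __init__(self, pipeline):
        self.pipeline = pipeline

    def handle_io_request(self, io_request):
        # Reformat the I/O request into a debug request and send it to the
        # pipeline over the (modeled) debug bus.
        debug_request = {"debug": True, "address": io_request["address"]}
        result = self.pipeline.process(debug_request)
        # The result would be returned to the requester over the fabric bus;
        # here it is simply returned to the caller.
        return result
```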
-
Publication number: US20240370371A1
Publication date: 2024-11-07
Application number: US18607128
Filing date: 2024-03-15
Applicant: Apple Inc.
Inventor: Per H. Hammarlund , Lior Zimet , James Vash , Gaurav Garg , Sergio Kolor , Harshavardhan Kaushikkar , Ramesh B. Gunna , Steven R. Hutsell
IPC: G06F12/0831 , G06F12/0811 , G06F12/0815 , G06F12/109 , G06F12/128 , G06F13/16 , G06F13/28 , G06F13/40 , G06F15/173 , G06F15/78
Abstract: A system including a plurality of processor cores, a plurality of graphics processing units, a plurality of peripheral circuits, and a plurality of memory controllers is configured to support scaling of the system using a unified memory architecture.
-
Publication number: US11947457B2
Publication date: 2024-04-02
Application number: US18058105
Filing date: 2022-11-22
Applicant: Apple Inc.
Inventor: James Vash , Gaurav Garg , Brian P. Lilly , Ramesh B. Gunna , Steven R. Hutsell , Lital Levy-Rubin , Per H. Hammarlund , Harshavardhan Kaushikkar
IPC: G06F12/0815 , G06F12/0831
CPC classification number: G06F12/0815 , G06F12/0831 , G06F2212/1032
Abstract: A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.
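One element of this protocol, carrying the directory's expected cache state in each snoop so the receiving agent can detect a race, can be sketched as follows. The state names, Snoop fields, and return strings are assumptions for illustration, not the protocol's actual encoding:

```python
from dataclasses import dataclass

# States mentioned in the abstract: a primary shared copy (responsible for
# forwarding the block) and secondary shared copies.
INVALID, SECONDARY_SHARED, PRIMARY_SHARED, EXCLUSIVE = range(4)


@dataclass
class Snoop:
    kind: str             # "snoop-forward" or "snoop-back"
    block: int
    expected_state: int   # state the directory recorded for this agent


class CoherentAgent:
    def __init__(self):
        self.lines = {}   # block -> current cache state

    def receive_snoop(self, snoop):
        actual = self.lines.get(snoop.block, INVALID)
        if actual != snoop.expected_state:
            # The agent's state no longer matches what the directory saw when
            # the request was processed: another request is racing, so the
            # agent can detect and resolve the race (e.g., defer the snoop).
            return "race-detected"
        if snoop.kind == "snoop-forward" and actual == PRIMARY_SHARED:
            # The primary-shared holder forwards a copy to the requesting agent.
            return "forward-data"
        return "ack"
```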
-
Publication number: US11675722B2
Publication date: 2023-06-13
Application number: US17337805
Filing date: 2021-06-03
Applicant: Apple Inc.
Inventor: Sergio Kolor , Sergio V. Tota , Tzach Zemer , Sagi Lahav , Jonathan M. Redshaw , Per H. Hammarlund , Eran Tamari , James Vash , Gaurav Garg , Lior Zimet , Harshavardhan Kaushikkar , Steven Fishwick , Steven R. Hutsell , Shawn M. Fukami
IPC: G06F13/40 , G06F15/173
CPC classification number: G06F13/4027 , G06F13/4022 , G06F15/17375 , G06F15/17381
Abstract: In an embodiment, a system on a chip (SOC) comprises a semiconductor die on which circuitry is formed, wherein the circuitry comprises a plurality of agents and a plurality of network switches coupled to the plurality of agents. The plurality of network switches are interconnected to form a plurality of physically and logically independent networks. A first network of the plurality of physically and logically independent networks is constructed according to a first topology and a second network of the plurality of physically and logically independent networks is constructed according to a second topology that is different from the first topology. For example, the first topology may be a ring topology and the second topology may be a mesh topology. In an embodiment, coherency may be enforced on the first network and the second network may be a relaxed order network.
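As a sketch of the architecture described here, the configuration below models two physically and logically independent networks with different topologies and ordering properties. The NetworkConfig fields and the example network names are illustrative assumptions, not terms from the patent:

```python
from dataclasses import dataclass


@dataclass
class NetworkConfig:
    name: str
    topology: str      # e.g. "ring" or "mesh"
    coherent: bool     # True if coherency is enforced; False for relaxed ordering


# Two independent networks, each built from its own set of switches and routed
# separately, per the abstract (names and the exact split are hypothetical).
networks = [
    NetworkConfig(name="coherent-net", topology="ring", coherent=True),
    NetworkConfig(name="relaxed-net", topology="mesh", coherent=False),
]


def describe(network):
    ordering = "coherent" if network.coherent else "relaxed-order"
    return f"{network.name}: {network.topology} topology, {ordering}"
```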
-
Publication number: US20230169003A1
Publication date: 2023-06-01
Application number: US18160575
Filing date: 2023-01-27
Applicant: Apple Inc.
Inventor: James Vash , Gaurav Garg , Brian P. Lilly , Ramesh B. Gunna , Steven R. Hutsell , Lital Levy-Rubin , Per H. Hammarlund , Harshavardhan Kaushikkar
IPC: G06F12/0815 , G06F12/0831
CPC classification number: G06F12/0815 , G06F12/0831 , G06F2212/1032
Abstract: A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.
-
Publication number: US20230064526A1
Publication date: 2023-03-02
Application number: US17657506
Filing date: 2022-03-31
Applicant: Apple Inc.
Inventor: Sagi Lahav , Lital Levy-Rubin , Gaurav Garg , Gerard R. Williams, III , Samer Nassar , Per H. Hammarlund , Harshavardhan Kaushikkar , Srinivasa Rangan Sridharan , Jeff Gonion
Abstract: Techniques are disclosed relating to an I/O agent circuit. The I/O agent circuit may include one or more queues and a transaction pipeline. The I/O agent circuit may issue, to the transaction pipeline from a queue of the one or more queues, a transaction of a series of transactions enqueued in a particular order. The I/O agent circuit may generate, at the transaction pipeline, a determination to return the transaction to the queue based on a detection of one or more conditions being satisfied. Based on the determination, the I/O agent circuit may reject, at the transaction pipeline, up to a threshold number of transactions that issued from the queue after the transaction issued. The I/O agent circuit may insert the transaction at a head of the queue such that the transaction is enqueued at the queue sequentially first for the series of transactions according to the particular order.
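The issue/reject/reinsert sequence in this abstract can be modeled with a short Python sketch. The IOAgent class, the reject_threshold parameter, and the in_flight list are illustrative assumptions about how the ordering rule could be expressed in software:

```python
from collections import deque


class IOAgent:
    def __init__(self, reject_threshold=3):
        self.queue = deque()       # transactions waiting, in their required order
        self.in_flight = []        # transactions issued to the pipeline, oldest first
        self.reject_threshold = reject_threshold

    def issue(self):
        if self.queue:
            self.in_flight.append(self.queue.popleft())

    def pipeline_returns(self, txn):
        """The pipeline has decided txn must be returned to the queue."""
        idx = self.in_flight.index(txn)
        # Reject up to `reject_threshold` transactions that issued after txn.
        younger = self.in_flight[idx + 1: idx + 1 + self.reject_threshold]
        kept = self.in_flight[idx + 1 + self.reject_threshold:]
        self.in_flight = self.in_flight[:idx] + kept
        # Put the rejected younger transactions back, then txn at the head, so
        # txn is again sequentially first for the series in its original order.
        for t in reversed(younger):
            self.queue.appendleft(t)
        self.queue.appendleft(txn)
```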
-
Publication number: US20230064187A1
Publication date: 2023-03-02
Application number: US17455321
Filing date: 2021-11-17
Applicant: Apple Inc.
Inventor: Rohit K. Gupta , Gregory S. Mathews , Harshavardhan Kaushikkar , Jeonghee Shin , Rohit Natarajan
IPC: H04L12/927 , H04L12/801 , H04L12/825
Abstract: Techniques are disclosed relating to merging virtual communication channels in a portion of a computing system. In some embodiments, a communication fabric routes first and second classes of traffic with different quality-of-service parameters, using a first virtual channel for the first class and a second virtual channel for the second class. In some embodiments, a memory controller communicates, via the fabric, using a merged virtual channel configured to handle traffic from both the first virtual channel and the second virtual channel. In some embodiments, the system limits the rate at which an agent is allowed to transmit requests of the second class of traffic, but requests by the agent for the first class of traffic are not rate limited. Disclosed techniques may improve independence of virtual channels, relative to sharing the same channel in an entire system, without unduly increasing complexity.
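A toy model of the described merging and rate limiting, assuming a simple token-bucket limiter and illustrative class names (RateLimiter, MergedChannel); where the limiting is enforced in the real fabric is not specified here and is an assumption of the sketch:

```python
import time


class RateLimiter:
    """Token-bucket limiter applied only to the second traffic class."""

    def __init__(self, tokens_per_second, burst):
        self.rate = tokens_per_second
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


class MergedChannel:
    """Merged virtual channel carrying both traffic classes toward the memory
    controller; class-A requests are never throttled, class-B requests are."""

    def __init__(self, class_b_limiter):
        self.class_b_limiter = class_b_limiter
        self.pending = []

    def submit(self, request, traffic_class):
        if traffic_class == "B" and not self.class_b_limiter.allow():
            return "throttled"     # the agent must retry this request later
        self.pending.append(request)
        return "accepted"
```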