-
Publication No.: US20230083397A1
Publication Date: 2023-03-16
Application No.: US18058105
Filing Date: 2022-11-22
Applicant: Apple Inc.
Inventor: James Vash, Gaurav Garg, Brian P. Lilly, Ramesh B. Gunna, Steven R. Hutsell, Lital Levy-Rubin, Per H. Hammarlund, Harshavardhan Kaushikkar
IPC: G06F12/0815, G06F12/0831
Abstract: A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.
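
The following is a minimal, illustrative sketch of the coherency flow this abstract describes: the memory controller's precise directory records an expected state for the target agent when it processes a request, carries that state in the snoop, and lets the agent treat a mismatch as an in-flight race; a primary-shared holder forwards data on a snoop forward, while a snoop back would return data to memory. All names (Directory, Agent, Snoop, CacheState) and the deferral policy are assumptions for illustration, not the patented implementation.

from dataclasses import dataclass
from enum import Enum, auto


class CacheState(Enum):
    INVALID = auto()
    SECONDARY_SHARED = auto()   # shared copy with no forwarding duty
    PRIMARY_SHARED = auto()     # shared copy responsible for forwarding data
    EXCLUSIVE = auto()


@dataclass
class Snoop:
    kind: str                   # "forward": data to requester; "back": data to memory
    block: int
    expected_state: CacheState  # state the directory recorded when it processed the request
    requester: str


class Agent:
    def __init__(self, name):
        self.name = name
        self.cache = {}         # block address -> CacheState

    def handle_snoop(self, snoop):
        actual = self.cache.get(snoop.block, CacheState.INVALID)
        if actual != snoop.expected_state:
            # Another message for this block is still in flight: a race the
            # agent can detect and, for example, defer the snoop on.
            return "defer"
        self.cache[snoop.block] = CacheState.SECONDARY_SHARED
        return "data_to_" + (snoop.requester if snoop.kind == "forward" else "memory")


class Directory:
    """Precise per-block, per-agent tracking at the memory controller."""

    def __init__(self):
        self.entries = {}       # block address -> {agent name: CacheState}

    def read_shared(self, block, requester, agents):
        states = self.entries.setdefault(block, {})
        for name, state in list(states.items()):
            if state in (CacheState.PRIMARY_SHARED, CacheState.EXCLUSIVE):
                # Snoop the holder that bears forwarding responsibility and
                # tell it which state the directory expects it to be in.
                result = agents[name].handle_snoop(Snoop("forward", block, state, requester))
                states[name] = CacheState.SECONDARY_SHARED
                states[requester] = CacheState.PRIMARY_SHARED
                return result
        # No cached copy: memory supplies the data; the requester becomes the
        # primary-shared holder for later requests.
        states[requester] = CacheState.PRIMARY_SHARED
        return "data_from_memory"


agents = {"A0": Agent("A0"), "A1": Agent("A1")}
directory = Directory()
agents["A0"].cache[0x40] = CacheState.PRIMARY_SHARED
directory.entries[0x40] = {"A0": CacheState.PRIMARY_SHARED}
print(directory.read_shared(0x40, "A1", agents))   # data_to_A1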
-
Publication No.: US11544193B2
Publication Date: 2023-01-03
Application No.: US17315725
Filing Date: 2021-05-10
Applicant: Apple Inc.
Inventor: James Vash, Gaurav Garg, Brian P. Lilly, Ramesh B. Gunna, Steven R. Hutsell, Lital Levy-Rubin, Per H. Hammarlund, Harshavardhan Kaushikkar
IPC: G06F12/0815, G06F12/0831
Abstract: A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.
-
Publication No.: US20220342471A1
Publication Date: 2022-10-27
Application No.: US17812086
Filing Date: 2022-07-12
Applicant: Apple Inc.
Inventor: Matthias Knoth, Srikanth Balasubramanian, Venkatram Krishnaswamy, Ramesh B. Gunna
IPC: G06F1/28, G06Q50/06, G06F1/3228
Abstract: An apparatus includes an execute circuit configured to execute a plurality of operations received from a queue, as well as a power estimator circuit and a power sensing circuit. The power estimator circuit is configured to predict power consumption due to execution of a particular operation of the plurality of operations, and to withdraw, based on the predicted power consumption, a first amount of power credits from a power credit pool. The power sensing circuit is configured to monitor one or more characteristics of a power supply node coupled to the execute circuit to generate a power value, and to deposit a second amount of power credits into the power credit pool. The second amount of power credits may be based on the power value indicating that power consumed during the execution of the particular operation is less than the predicted power consumption.
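
Below is a small sketch, under assumed names, of the power-credit accounting this abstract outlines: credits are withdrawn from a shared pool based on predicted consumption before an operation issues, and the difference is deposited back when the sensed power turns out to be lower than the estimate. PowerCreditPool, execute, and measure_power are hypothetical stand-ins for the estimator, execute, and sensing circuits, not the patented design.

class PowerCreditPool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.credits = capacity

    def withdraw(self, amount):
        if amount > self.credits:
            return False            # not enough headroom; the caller must wait
        self.credits -= amount
        return True

    def deposit(self, amount):
        self.credits = min(self.capacity, self.credits + amount)


def execute(op, predicted_credits, pool, measure_power):
    """Run one operation under the credit scheme.

    measure_power(op) stands in for the power sensing circuit and returns the
    credits actually consumed while the operation executed.
    """
    if not pool.withdraw(predicted_credits):
        return False                # the execute circuit would stall this cycle
    consumed = measure_power(op)
    if consumed < predicted_credits:
        # Return the over-estimate so later operations can issue sooner.
        pool.deposit(predicted_credits - consumed)
    return True


pool = PowerCreditPool(capacity=100)
execute("fma", predicted_credits=8, pool=pool, measure_power=lambda op: 5)
print(pool.credits)                 # 95: 8 withdrawn up front, 3 deposited back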
-
Publication No.: US20190286218A1
Publication Date: 2019-09-19
Application No.: US16363517
Filing Date: 2019-03-25
Applicant: Apple Inc.
Inventor: Conrado Blasco, Ronald P. Hall, Ramesh B. Gunna, Ian D. Kountanis, Shyam Sundar, André Seznec
IPC: G06F1/3237, G06F1/3296, G06F1/3234, G06F1/324, G06F9/38
Abstract: A processor includes a mechanism for disabling a memory array of a branch prediction unit. The processor may include a next fetch prediction unit that may include a number of entries. Each entry may correspond to a next instruction fetch group and may store an indication of whether or not the corresponding next fetch group includes a conditional branch instruction. In response to an indication that the next fetch group does not include a conditional branch instruction, the next fetch prediction unit may be configured to disable, in a next instruction execution cycle, the memory array of the branch prediction unit.
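
A brief sketch of the gating idea follows, using hypothetical names (NextFetchEntry, BranchPredictorArray, NextFetchPredictor): each next-fetch entry carries a has_conditional_branch bit, and that bit decides whether the branch-prediction memory array is enabled for the following fetch cycle. This illustrates the described behavior, not the patented circuit.

from dataclasses import dataclass


@dataclass
class NextFetchEntry:
    next_fetch_pc: int
    has_conditional_branch: bool


class BranchPredictorArray:
    def __init__(self):
        self.enabled = True

    def set_enabled(self, enabled):
        self.enabled = enabled      # a disabled array burns no dynamic lookup power

    def predict(self, pc):
        if not self.enabled:
            return None             # no lookup performed this cycle
        return "taken"              # placeholder prediction


class NextFetchPredictor:
    def __init__(self, branch_array):
        self.entries = {}           # current fetch PC -> NextFetchEntry
        self.branch_array = branch_array

    def on_fetch(self, pc):
        entry = self.entries.get(pc)
        if entry is None:
            self.branch_array.set_enabled(True)   # unknown fetch group: stay safe
            return pc + 0x10
        # Gate the branch predictor for the *next* cycle based on whether the
        # next fetch group actually contains a conditional branch.
        self.branch_array.set_enabled(entry.has_conditional_branch)
        return entry.next_fetch_pc


array = BranchPredictorArray()
nfp = NextFetchPredictor(array)
nfp.entries[0x1000] = NextFetchEntry(next_fetch_pc=0x2000, has_conditional_branch=False)
nfp.on_fetch(0x1000)
print(array.predict(0x2000))        # None: the array stayed off, no lookup spent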
-
Publication No.: US10147464B1
Publication Date: 2018-12-04
Application No.: US15628017
Filing Date: 2017-06-20
Applicant: Apple Inc.
Inventor: Shih-Chieh Wen, Jong-Suk Lee, Ramesh B. Gunna
IPC: G06F12/08, G11C5/14, G06F1/26, G06F9/28, G06F1/30, G06F11/20, G06F11/30, G06F1/00, G06F9/00
Abstract: An IC in which a power state of a circuit in one power domain is managed based at least in part on a power state of a circuit in another power domain is disclosed. In one embodiment, an IC includes first and second functional circuit blocks in first and second power domains, respectively. A third functional circuit block, shared by the first and second functional circuit blocks, is also implemented in the first power domain. A power management unit may control power states of each of the first, second, and third functional circuit blocks. The power management unit may, when the first functional circuit block is in a sleep state, set the power state of the third functional circuit block in accordance with that of the second functional circuit block.
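
The sketch below illustrates the cross-domain rule, with hypothetical block names (block1, block2, shared) and a simplified two-state model: while the first block is awake, the shared block in its domain stays awake with it, and once the first block sleeps, the shared block's state instead tracks the block in the second domain that still uses it. This is an assumption-laden illustration, not the patented power management unit.

from enum import Enum


class PowerState(Enum):
    ACTIVE = "active"
    SLEEP = "sleep"


class PowerManagementUnit:
    def __init__(self):
        # block name -> current power state
        self.states = {"block1": PowerState.ACTIVE,
                       "block2": PowerState.ACTIVE,
                       "shared": PowerState.ACTIVE}

    def set_state(self, block, state):
        self.states[block] = state
        self._update_shared()

    def _update_shared(self):
        if self.states["block1"] is PowerState.ACTIVE:
            # The shared block lives in block1's power domain, so it follows
            # block1 whenever block1 is awake.
            self.states["shared"] = PowerState.ACTIVE
        else:
            # block1 is asleep: keep the shared block only as awake as the
            # second-domain block that still depends on it.
            self.states["shared"] = self.states["block2"]


pmu = PowerManagementUnit()
pmu.set_state("block1", PowerState.SLEEP)
print(pmu.states["shared"])         # PowerState.ACTIVE: block2 still needs it
pmu.set_state("block2", PowerState.SLEEP)
print(pmu.states["shared"])         # PowerState.SLEEP: nothing uses it any more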
-
Publication No.: US20240370371A1
Publication Date: 2024-11-07
Application No.: US18607128
Filing Date: 2024-03-15
Applicant: Apple Inc.
Inventor: Per H. Hammarlund, Lior Zimet, James Vash, Gaurav Garg, Sergio Kolor, Harshavardhan Kaushikkar, Ramesh B. Gunna, Steven R. Hutsell
IPC: G06F12/0831, G06F12/0811, G06F12/0815, G06F12/109, G06F12/128, G06F13/16, G06F13/28, G06F13/40, G06F15/173, G06F15/78
Abstract: A system including a plurality of processor cores, a plurality of graphics processing units, a plurality of peripheral circuits, and a plurality of memory controllers is configured to support scaling of the system using a unified memory architecture.
-
Publication No.: US11947457B2
Publication Date: 2024-04-02
Application No.: US18058105
Filing Date: 2022-11-22
Applicant: Apple Inc.
Inventor: James Vash, Gaurav Garg, Brian P. Lilly, Ramesh B. Gunna, Steven R. Hutsell, Lital Levy-Rubin, Per H. Hammarlund, Harshavardhan Kaushikkar
IPC: G06F12/0815, G06F12/0831
CPC classification number: G06F12/0815, G06F12/0831, G06F2212/1032
Abstract: A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.
-
Publication No.: US11675409B2
Publication Date: 2023-06-13
Application No.: US17812086
Filing Date: 2022-07-12
Applicant: Apple Inc.
Inventor: Matthias Knoth, Srikanth Balasubramanian, Venkatram Krishnaswamy, Ramesh B. Gunna
IPC: G06F1/28, G06Q50/06, G06F1/3228
CPC classification number: G06F1/28, G06F1/3228, G06Q50/06
Abstract: An apparatus includes an execute circuit configured to execute a plurality of operations received from a queue, as well as a power estimator circuit and a power sensing circuit. The power estimator circuit is configured to predict power consumption due to execution of a particular operation of the plurality of operations, and to withdraw, based on the predicted power consumption, a first amount of power credits from a power credit pool. The power sensing circuit is configured to monitor one or more characteristics of a power supply node coupled to the execute circuit to generate a power value, and to deposit a second amount of power credits into the power credit pool. The second amount of power credits may be based on the power value indicating that power consumed during the execution of the particular operation is less than the predicted power consumption.
-
Publication No.: US20230169003A1
Publication Date: 2023-06-01
Application No.: US18160575
Filing Date: 2023-01-27
Applicant: Apple Inc.
Inventor: James Vash, Gaurav Garg, Brian P. Lilly, Ramesh B. Gunna, Steven R. Hutsell, Lital Levy-Rubin, Per H. Hammarlund, Harshavardhan Kaushikkar
IPC: G06F12/0815, G06F12/0831
CPC classification number: G06F12/0815, G06F12/0831, G06F2212/1032
Abstract: A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.
-
Publication No.: US20230056044A1
Publication Date: 2023-02-23
Application No.: US17821296
Filing Date: 2022-08-22
Applicant: Apple Inc.
Inventor: Per H. Hammarlund, Lior Zimet, James Vash, Gaurav Garg, Sergio Kolor, Harshavardhan Kaushikkar, Ramesh B. Gunna, Steven R. Hutsell
IPC: G06F13/28, G06F12/0815, G06F12/109
Abstract: A system including a plurality of processor cores, a plurality of graphics processing units, a plurality of peripheral circuits, and a plurality of memory controllers is configured to support scaling of the system using a unified memory architecture.
-