-
Publication No.: US20240232084A9
Publication Date: 2024-07-11
Application No.: US17957823
Filing Date: 2022-10-19
Applicant: Advanced Micro Devices, Inc.
Inventor: Ganesh Balakrishnan, Amit Apte, Ann Ling, Vydhyanathan Kalyanasundharam
IPC: G06F12/0815
CPC classification number: G06F12/0815
Abstract: A method includes, in a cache directory, storing an entry associating a memory region with an exclusive coherency state, and in response to a memory access directed to the memory region, transmitting a demote superprobe to convert at least one cache line of the memory region from an exclusive coherency state to a shared coherency state.
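The region-based demotion this abstract describes can be sketched roughly as follows. This is a hypothetical model: the `CacheDirectory` class, its methods, and the per-line state map are illustrative names invented here, not taken from the patent.

```python
# Sketch of a cache directory that tracks coherency state per memory REGION
# and, on an access to an Exclusive region, issues a "demote superprobe"
# that downgrades every cached line in that region from Exclusive to Shared.
from enum import Enum


class State(Enum):
    EXCLUSIVE = "E"
    SHARED = "S"


class CacheDirectory:
    """Tracks coherency state per memory region rather than per line."""

    def __init__(self):
        self.regions = {}  # region base address -> State

    def record_exclusive(self, region):
        self.regions[region] = State.EXCLUSIVE

    def access(self, region, line_states):
        # On a memory access directed to a region held Exclusive, demote
        # each Exclusive line in the region to Shared with one probe,
        # instead of probing line by line.
        if self.regions.get(region) is State.EXCLUSIVE:
            for line in line_states:
                if line_states[line] is State.EXCLUSIVE:
                    line_states[line] = State.SHARED
            self.regions[region] = State.SHARED


d = CacheDirectory()
d.record_exclusive(0x1000)
lines = {0x1000: State.EXCLUSIVE, 0x1040: State.EXCLUSIVE}
d.access(0x1000, lines)  # both lines are now Shared
```

The point of tracking whole regions is that a single directory entry (and a single superprobe) covers many lines, reducing directory storage and probe traffic.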
-
Publication No.: US20240134795A1
Publication Date: 2024-04-25
Application No.: US17957823
Filing Date: 2022-10-18
Applicant: Advanced Micro Devices, Inc.
Inventor: Ganesh Balakrishnan, Amit Apte, Ann Ling, Vydhyanathan Kalyanasundharam
IPC: G06F12/0815
CPC classification number: G06F12/0815
Abstract: A method includes, in a cache directory, storing an entry associating a memory region with an exclusive coherency state, and in response to a memory access directed to the memory region, transmitting a demote superprobe to convert at least one cache line of the memory region from an exclusive coherency state to a shared coherency state.
-
Publication No.: US20220100661A1
Publication Date: 2022-03-31
Application No.: US17130905
Filing Date: 2020-12-22
Applicant: Advanced Micro Devices, Inc.
Inventor: Amit Apte, Ganesh Balakrishnan, Ann Ling, Vydhyanathan Kalyanasundharam
IPC: G06F12/0817
Abstract: Disclosed are examples of a system and method to communicate cache line eviction data from a CPU subsystem to a home node over a prioritized channel and to release the cache subsystem early to process other transactions.
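The prioritized-channel eviction this abstract describes can be modeled as a fabric that drains eviction messages ahead of ordinary traffic. The channel numbering, class names, and message strings below are assumptions made for illustration; the patent does not specify this interface.

```python
# Model of a fabric with a prioritized channel: eviction writebacks drain
# before normal requests, and the cache subsystem is "released early" --
# it issues its next transaction without waiting for the home node's ack.
import heapq

PRIORITY_EVICTION = 0  # lower number drains first
PRIORITY_NORMAL = 1


class Fabric:
    def __init__(self):
        self._q = []
        self._seq = 0  # tie-breaker preserving FIFO order within a channel

    def send(self, priority, msg):
        heapq.heappush(self._q, (priority, self._seq, msg))
        self._seq += 1

    def drain(self):
        order = []
        while self._q:
            _, _, msg = heapq.heappop(self._q)
            order.append(msg)
        return order


fabric = Fabric()
fabric.send(PRIORITY_NORMAL, "read A")
fabric.send(PRIORITY_EVICTION, "evict line X")  # jumps ahead of "read A"
fabric.send(PRIORITY_NORMAL, "read B")          # issued without waiting for ack
```

Draining the fabric delivers the eviction first, even though it was sent second, which is the property the prioritized channel provides.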
-
Publication No.: US11954033B1
Publication Date: 2024-04-09
Application No.: US17957823
Filing Date: 2022-10-19
Applicant: Advanced Micro Devices, Inc.
Inventor: Ganesh Balakrishnan, Amit Apte, Ann Ling, Vydhyanathan Kalyanasundharam
IPC: G06F12/0815
CPC classification number: G06F12/0815
Abstract: A method includes, in a cache directory, storing an entry associating a memory region with an exclusive coherency state, and in response to a memory access directed to the memory region, transmitting a demote superprobe to convert at least one cache line of the memory region from an exclusive coherency state to a shared coherency state.
-
Publication No.: US11803470B2
Publication Date: 2023-10-31
Application No.: US17130905
Filing Date: 2020-12-22
Applicant: Advanced Micro Devices, Inc.
Inventor: Amit Apte, Ganesh Balakrishnan, Ann Ling, Vydhyanathan Kalyanasundharam
IPC: G06F12/0817
CPC classification number: G06F12/0828, G06F2212/621
Abstract: Disclosed are examples of a system and method to communicate cache line eviction data from a CPU subsystem to a home node over a prioritized channel and to release the cache subsystem early to process other transactions.
-
Publication No.: US11281280B2
Publication Date: 2022-03-22
Application No.: US16876325
Filing Date: 2020-05-18
Applicant: Advanced Micro Devices, Inc.; ATI Technologies ULC
Inventor: Benjamin Tsien, Michael J. Tresidder, Ivan Yanfeng Wang, Kevin M. Lepak, Ann Ling, Richard M. Born, John P. Petry, Bryan P. Broussard, Eric Christopher Morton
IPC: G06F1/32, G06F1/3206, G06F1/3287, G06F1/3234
Abstract: Systems, apparatuses, and methods for reducing chiplet interrupt latency are disclosed. A system includes one or more processing nodes, one or more memory devices, a communication fabric coupled to the processing unit(s) and memory device(s) via link interfaces, and a power management unit. The power management unit manages the power states of the various components and the link interfaces of the system. If the power management unit detects a request to wake up a given component, and the link interface to the given component is powered down, then the power management unit sends an out-of-band signal to wake up the given component in parallel with powering up the link interface. Also, when multiple link interfaces need to be powered up, the power management unit powers up the multiple link interfaces in an order which complies with voltage regulator load-step requirements while minimizing the latency of pending operations.
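The ordered link power-up this abstract describes can be sketched as a batching problem: bring up links with the most pending operations first, but never exceed the regulator's load-step limit in any one step. The function name, the load values, and the load-step limit below are invented for illustration.

```python
# Sketch of power-up sequencing that respects a voltage regulator's
# load-step limit while minimizing latency for pending operations.
# (The out-of-band wake signal described in the abstract would be sent in
# parallel with this sequence, so the component boots while links power up.)
def plan_link_powerup(pending_links, max_load_step):
    """Group link power-ups into batches so no batch's total load exceeds
    the regulator's load-step limit, powering the links with the most
    pending traffic first."""
    # Prioritize links by pending-operation count to minimize latency.
    ordered = sorted(pending_links, key=lambda link: -link["pending_ops"])
    batches = []
    batch, load = [], 0
    for link in ordered:
        if load + link["load"] > max_load_step and batch:
            batches.append(batch)       # close the current batch
            batch, load = [], 0
        batch.append(link["name"])
        load += link["load"]
    if batch:
        batches.append(batch)
    return batches


links = [
    {"name": "l0", "pending_ops": 5, "load": 2},
    {"name": "l1", "pending_ops": 1, "load": 2},
    {"name": "l2", "pending_ops": 3, "load": 2},
]
batches = plan_link_powerup(links, max_load_step=3)  # [['l0'], ['l2'], ['l1']]
```

With a load-step limit of 3 and each link drawing 2 units, the links come up one per batch, busiest first.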
-
Publication No.: US10656696B1
Publication Date: 2020-05-19
Application No.: US15907719
Filing Date: 2018-02-28
Applicant: Advanced Micro Devices, Inc.; ATI Technologies ULC
Inventor: Benjamin Tsien, Michael J. Tresidder, Ivan Yanfeng Wang, Kevin M. Lepak, Ann Ling, Richard M. Born, John P. Petry, Bryan P. Broussard, Eric Christopher Morton
IPC: G06F1/32, G06F1/3206, G06F1/3287, G06F1/3234
Abstract: Systems, apparatuses, and methods for reducing chiplet interrupt latency are disclosed. A system includes one or more processing nodes, one or more memory devices, a communication fabric coupled to the processing unit(s) and memory device(s) via link interfaces, and a power management unit. The power management unit manages the power states of the various components and the link interfaces of the system. If the power management unit detects a request to wake up a given component, and the link interface to the given component is powered down, then the power management unit sends an out-of-band signal to wake up the given component in parallel with powering up the link interface. Also, when multiple link interfaces need to be powered up, the power management unit powers up the multiple link interfaces in an order which complies with voltage regulator load-step requirements while minimizing the latency of pending operations.
-
Publication No.: US20190179758A1
Publication Date: 2019-06-13
Application No.: US15839662
Filing Date: 2017-12-12
Applicant: Advanced Micro Devices, Inc.
Inventor: Vydhyanathan Kalyanasundharam, Amit P. Apte, Ganesh Balakrishnan, Ann Ling, Ravindra N. Bhargava
IPC: G06F12/0862, G06F12/0811, G06F12/084, G06F12/0831
Abstract: Systems, apparatuses, and methods for accelerating cache to cache data transfers are disclosed. A system includes at least a plurality of processing nodes and prediction units, an interconnect fabric, and a memory. A first prediction unit is configured to receive memory requests generated by a first processing node as the requests traverse the interconnect fabric on the path to memory. When the first prediction unit receives a memory request, the first prediction unit generates a prediction of whether data targeted by the request is cached by another processing node. The first prediction unit is configured to cause a speculative probe to be sent to a second processing node responsive to predicting that the data targeted by the memory request is cached by the second processing node. The speculative probe accelerates the retrieval of the data from the second processing node if the prediction is correct.
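The prediction unit this abstract describes can be sketched with a simple last-writer table: if another node recently wrote an address, predict that it still caches the data and send it a speculative probe in parallel with the memory access. The last-writer predictor is one plausible mechanism chosen here for illustration; the abstract does not mandate any particular prediction scheme.

```python
# Sketch of a prediction unit that watches memory requests crossing the
# interconnect fabric and predicts which node caches the requested data.
class PredictionUnit:
    def __init__(self):
        self.last_writer = {}  # address -> node that last wrote it

    def observe_write(self, addr, node):
        """Record which node last wrote this address."""
        self.last_writer[addr] = node

    def on_memory_request(self, addr, requester):
        """Return the node to send a speculative probe to, or None.

        The probe is sent in parallel with the request's trip to memory;
        if the prediction is correct, the cache-to-cache transfer starts
        before the memory access would have completed."""
        owner = self.last_writer.get(addr)
        if owner is not None and owner != requester:
            return owner
        return None


pu = PredictionUnit()
pu.observe_write(0x80, "node1")
target = pu.on_memory_request(0x80, "node0")  # predicts "node1"
```

A mispredicted probe costs only a wasted message, while a correct one hides most of the cache-to-cache transfer latency behind the memory access, which is why a cheap predictor is worthwhile here.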
-