-
Publication number: US12147366B2
Publication date: 2024-11-19
Application number: US17853812
Filing date: 2022-06-29
Applicant: Advanced Micro Devices, Inc.
Inventor: Michael J. Tresidder , Benjamin Tsien
Abstract: Systems and methods are disclosed for voltage droop mitigation associated with a voltage rail that supplies power to circuitry of a chiplet. Techniques disclosed include detecting an upcoming transmission of data packets through a physical layer of the chiplet and then, before that transmission, throttling a rate of bandwidth utilization in the physical layer, after which a controller transmits the data packets through the physical layer.
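A minimal sketch of the throttle-then-transmit idea this abstract describes, written in Python rather than hardware; the class name, packet rates, and ramp schedule are illustrative assumptions, not the patented implementation.

```python
# Illustrative sketch only: ramp PHY bandwidth up gradually at the start of a
# burst so the voltage rail sees a smaller current step (di/dt), mitigating droop.
# Rates and names are assumptions, not taken from the patent.

class ThrottledPhy:
    def __init__(self, full_rate_pkts_per_cycle=4, ramp_steps=(1, 2, 3)):
        self.full_rate = full_rate_pkts_per_cycle
        self.ramp_steps = ramp_steps  # reduced rates used just before full rate

    def transmit_burst(self, packets):
        """Send packets, starting at a throttled rate and ramping to full rate."""
        schedule = []
        rates = list(self.ramp_steps) + [self.full_rate]
        i = 0
        step = 0
        while i < len(packets):
            rate = rates[min(step, len(rates) - 1)]
            schedule.append(packets[i:i + rate])  # packets sent this "cycle"
            i += rate
            step += 1
        return schedule


if __name__ == "__main__":
    phy = ThrottledPhy()
    for cycle, pkts in enumerate(phy.transmit_burst([f"pkt{n}" for n in range(12)])):
        print(f"cycle {cycle}: {len(pkts)} packet(s)")
```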
-
Publication number: US20240111442A1
Publication date: 2024-04-04
Application number: US17936809
Filing date: 2022-09-29
Applicant: Advanced Micro Devices, Inc. , ATI Technologies ULC
Inventor: Ashish Jain , Shang Yang , Jun Lei , Gia Tung Phan , Oswin Hall , Benjamin Tsien , Narendra Kamat
IPC: G06F3/06
CPC classification number: G06F3/0634 , G06F3/0604 , G06F3/0653 , G06F3/0679
Abstract: Systems, apparatuses, and methods for prefetching data by a display controller. From time to time, a performance-state change of a memory is performed. During such changes, a memory clock frequency is changed for a memory subsystem storing frame buffer(s) used to drive pixels to a display device. During the performance-state change, memory accesses may be temporarily blocked. To sustain a desired quality of service for the display, a display controller is configured to prefetch data in advance of the performance-state change. In order to ensure the display controller has sufficient memory bandwidth to accomplish the prefetch, bandwidth reduction circuitry in clients of the system is configured to temporarily reduce the memory bandwidth of the corresponding clients.
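A sketch of the sequencing this abstract implies, assuming made-up FIFO sizes, pixel clocks, and blackout durations; it only illustrates why the display FIFO must be filled before memory traffic is blocked, not the actual hardware.

```python
# Illustrative sequencing sketch (not the patented hardware): before a memory
# performance-state change blocks DRAM access, the display controller prefetches
# enough pixel data to keep scanning out; other clients briefly cut bandwidth
# so the prefetch can complete in time. All numbers are assumptions.

def bytes_needed(blackout_us, pixel_clock_mhz, bytes_per_pixel):
    """Frame-buffer bytes the display consumes while memory is unavailable."""
    pixels = blackout_us * pixel_clock_mhz  # us * MHz gives a pixel count
    return int(pixels * bytes_per_pixel)

def pstate_change(display_fifo_bytes, blackout_us, pixel_clock_mhz=300,
                  bytes_per_pixel=4):
    need = bytes_needed(blackout_us, pixel_clock_mhz, bytes_per_pixel)
    if need > display_fifo_bytes:
        raise RuntimeError("FIFO too small; cannot hide this p-state change")
    print("1. throttle other memory clients")
    print(f"2. prefetch {need} bytes into the display FIFO")
    print("3. block memory traffic and switch the memory clock")
    print("4. unblock traffic and restore client bandwidth")

if __name__ == "__main__":
    pstate_change(display_fifo_bytes=2 * 1024 * 1024, blackout_us=20)
```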
-
Publication number: US20230418753A1
Publication date: 2023-12-28
Application number: US17852296
Filing date: 2022-06-28
Applicant: Advanced Micro Devices, Inc.
Inventor: Chintan S. Patel , Alexander J. Branover , Benjamin Tsien , Edgar Munoz , Vydhyanathan Kalyanasundharam
IPC: G06F12/0871 , G06F12/0864 , G06F12/0811
CPC classification number: G06F12/0871 , G06F12/0811 , G06F12/0864
Abstract: A technique for operating a cache is disclosed. The technique includes, based on a workload change, identifying a first allocation permissions policy; operating the cache according to the first allocation permissions policy; based on set sampling, identifying a second allocation permissions policy; and operating the cache according to the second allocation permissions policy.
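A small sketch of the set-sampling idea mentioned above, under the assumption (common in the cache literature, not stated in the abstract) that a few "leader" sets run each candidate policy and the remaining sets follow the policy with fewer observed misses. Names and counts are illustrative.

```python
# Sketch of set sampling for choosing an allocation policy (illustrative only).
# A few leader sets always use policy A, a few always use policy B; the rest
# follow whichever leader group currently shows fewer misses.

import random

class SampledCache:
    def __init__(self, num_sets=64, leaders_per_policy=4):
        sets = list(range(num_sets))
        random.shuffle(sets)
        self.leaders_a = set(sets[:leaders_per_policy])
        self.leaders_b = set(sets[leaders_per_policy:2 * leaders_per_policy])
        self.misses_a = 0
        self.misses_b = 0

    def policy_for(self, set_index):
        if set_index in self.leaders_a:
            return "A"
        if set_index in self.leaders_b:
            return "B"
        # follower sets adopt the leader policy with fewer observed misses
        return "A" if self.misses_a <= self.misses_b else "B"

    def record_miss(self, set_index):
        if set_index in self.leaders_a:
            self.misses_a += 1
        elif set_index in self.leaders_b:
            self.misses_b += 1
```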
-
Publication number: US20230418745A1
Publication date: 2023-12-28
Application number: US17852300
Filing date: 2022-06-28
Applicant: Advanced Micro Devices, Inc.
IPC: G06F12/0802
CPC classification number: G06F12/0802 , G06F2212/60
Abstract: A technique for operating a cache is disclosed. The technique includes utilizing a first portion of a cache in a directly accessed manner; and utilizing a second portion of the cache as a cache.
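A toy model of splitting one storage array between a directly accessed portion and a conventionally cached portion, as the abstract describes; the split point, line size, and eviction rule are assumptions for illustration only.

```python
# Toy model (not the patented design): one physical array where addresses below
# SPLIT are served directly (scratchpad-style) and the remaining capacity backs
# an ordinary tagged cache.

class SplitArray:
    SPLIT = 4096  # bytes used as directly addressed storage (assumed size)

    def __init__(self, capacity=16384):
        self.direct = bytearray(self.SPLIT)          # directly accessed portion
        self.cache = {}                              # line tag -> 64-byte line
        self.cache_capacity_lines = (capacity - self.SPLIT) // 64

    def read(self, addr, backing_memory):
        if addr < self.SPLIT:
            return self.direct[addr]                 # no tags, no misses
        tag = addr // 64
        if tag not in self.cache:                    # miss: fill the line
            if len(self.cache) >= self.cache_capacity_lines:
                self.cache.pop(next(iter(self.cache)))   # crude eviction
            self.cache[tag] = backing_memory.get(tag, bytes(64))
        return self.cache[tag][addr % 64]
```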
-
Publication number: US11586472B2
Publication date: 2023-02-21
Application number: US16709404
Filing date: 2019-12-10
Applicant: Advanced Micro Devices, Inc.
Inventor: Alexander J. Branover , Benjamin Tsien , Elliot H. Mednick
Abstract: A method, system, and apparatus determines that one or more tasks should be relocated from a first processor to a second processor by comparing performance metrics to associated thresholds or by using other indications. To relocate the one or more tasks from the first processor to the second processor, the first processor is stalled and state information from the first processor is copied to the second processor. The second processor uses the state information and then services incoming tasks instead of the first processor.
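A minimal sketch of the relocation sequence the abstract outlines (compare a metric to a threshold, stall the source processor, copy its state, redirect tasks); the class and field names are hypothetical.

```python
# Illustrative sketch of the relocation flow described above (names assumed):
# compare a performance metric to a threshold, stall the source core, copy its
# architectural state to the target core, then steer new tasks to the target.

class Core:
    def __init__(self, name):
        self.name = name
        self.state = {}
        self.stalled = False

def should_relocate(perf_metric, threshold):
    return perf_metric > threshold

def relocate(src: Core, dst: Core):
    src.stalled = True                    # 1. stall the first processor
    dst.state = dict(src.state)           # 2. copy state information over
    return dst                            # 3. dst now services incoming tasks

if __name__ == "__main__":
    little, big = Core("little"), Core("big")
    little.state = {"pc": 0x1000, "regs": [0] * 16}
    active = little
    if should_relocate(perf_metric=0.92, threshold=0.8):
        active = relocate(little, big)
    print("tasks now serviced by", active.name)
```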
-
Publication number: US11513973B2
Publication date: 2022-11-29
Application number: US16723185
Filing date: 2019-12-20
Applicant: ADVANCED MICRO DEVICES, INC.
Inventor: Sonu Arora , Benjamin Tsien , Alexander J. Branover
IPC: G06F12/14 , G06F12/06 , G06F12/0877 , G06F9/54 , G06F12/1027 , G06F9/50 , G06F11/30 , G06F12/1009 , G06F9/30
Abstract: A processor in a system is responsive to a coherent memory request buffer having a plurality of entries to store coherent memory requests from a client module and a non-coherent memory request buffer having a plurality of entries to store non-coherent memory requests from the client module. The client module buffers coherent and non-coherent memory requests and releases the memory requests based on one or more conditions of the processor or one of its caches. The memory requests are released to a central data fabric and into the system based on a first watermark associated with the coherent memory buffer and a second watermark associated with the non-coherent memory buffer.
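A sketch of the watermark-based release idea, assuming two simple software buffers and made-up watermark values; it shows the batching behavior the abstract describes, not the actual client-module logic.

```python
# Sketch (illustrative, not the patented logic): buffer coherent and
# non-coherent requests separately and release each buffer to the data fabric
# once its fill level crosses that buffer's watermark, batching fabric activity.

class RequestBuffers:
    def __init__(self, coherent_watermark=8, noncoherent_watermark=16):
        self.coherent = []
        self.noncoherent = []
        self.wm_c = coherent_watermark
        self.wm_nc = noncoherent_watermark

    def enqueue(self, request, coherent):
        (self.coherent if coherent else self.noncoherent).append(request)
        return self.maybe_release()

    def maybe_release(self):
        released = []
        if len(self.coherent) >= self.wm_c:        # first watermark crossed
            released += self.coherent
            self.coherent = []
        if len(self.noncoherent) >= self.wm_nc:    # second watermark crossed
            released += self.noncoherent
            self.noncoherent = []
        return released  # requests handed to the data fabric this step
```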
-
Publication number: US11422935B2
Publication date: 2022-08-23
Application number: US17033287
Filing date: 2020-09-25
Applicant: Advanced Micro Devices, Inc.
Inventor: Chintan S. Patel , Vydhyanathan Kalyanasundharam , Benjamin Tsien
IPC: G06F12/0817
Abstract: A method of controlling a cache is disclosed. The method comprises receiving a request to allocate a portion of memory to store data. The method also comprises directly mapping a portion of memory to an assigned contiguous portion of the cache memory when the request to allocate a portion of memory to store the data includes a cache residency request that the data continuously resides in cache memory. The method also comprises mapping the portion of memory to the cache memory using associative mapping when the request to allocate a portion of memory to store the data does not include a cache residency request that data continuously resides in the cache memory.
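A minimal sketch of the allocation decision the abstract walks through: pin a contiguous slice of the cache when the request carries a residency hint, otherwise fall back to ordinary associative caching. The allocator class and return values are hypothetical.

```python
# Illustrative allocation decision (names assumed): a cache-residency request
# gets a contiguous, directly mapped slice of the cache; everything else is
# cached associatively like normal memory.

class CacheAllocator:
    def __init__(self, cache_bytes=8 * 1024 * 1024):
        self.next_pinned = 0
        self.cache_bytes = cache_bytes

    def allocate(self, size, cache_resident=False):
        if cache_resident:
            if self.next_pinned + size > self.cache_bytes:
                raise MemoryError("not enough cache left to pin")
            base = self.next_pinned
            self.next_pinned += size
            # direct mapping: this range always lives at [base, base + size)
            return {"policy": "direct", "cache_offset": base, "size": size}
        # associative mapping: lines compete for ways like ordinary traffic
        return {"policy": "associative", "size": size}

if __name__ == "__main__":
    alloc = CacheAllocator()
    print(alloc.allocate(64 * 1024, cache_resident=True))
    print(alloc.allocate(64 * 1024))
```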
-
Publication number: US11289131B2
Publication date: 2022-03-29
Application number: US17113322
Filing date: 2020-12-07
Applicant: Advanced Micro Devices, Inc.
Inventor: Benjamin Tsien , Alexander J. Branover , Alan Dodson Smith , Chintan S. Patel
IPC: G11C5/06 , G06F1/3296 , G06F13/40 , G06F1/3234 , G06F1/3203 , G06F1/3287 , G11C5/02 , G11C5/14
Abstract: Systems, apparatuses, and methods for implementing dynamic control of a multi-region fabric are disclosed. A system includes at least one or more processing units, one or more memory devices, and a communication fabric coupled to the processing unit(s) and memory device(s). The system partitions the fabric into multiple regions based on different traffic types and/or periodicities of the clients connected to the regions. For example, the system partitions the fabric into a stutter region for predictable, periodic clients and a non-stutter region for unpredictable, non-periodic clients. The system power-gates the entirety of the fabric in response to detecting a low activity condition. After power-gating the entirety of the fabric, the system periodically wakes up one or more stutter regions while keeping the other non-stutter regions in power-gated mode. Each stutter region monitors stutter client(s) for activity and processes any requests before going back into power-gated mode.
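A sketch of the stutter-region wake loop described above, assuming a timer-driven software model with invented region and client objects; the non-stutter regions simply never appear in the loop, standing in for staying power-gated.

```python
# Sketch of the stutter/non-stutter split (illustrative only): after the whole
# fabric is power-gated, only the stutter region wakes on a timer, drains any
# pending requests from its periodic clients, and gates itself again.

import time

class Region:
    def __init__(self, name, clients):
        self.name = name
        self.clients = clients          # each client is a list of pending requests
        self.powered = False

    def wake_and_drain(self):
        self.powered = True
        drained = sum(len(c) for c in self.clients)
        for c in self.clients:
            c.clear()
        self.powered = False            # back to power-gated after servicing
        return drained

def stutter_loop(stutter: Region, wake_period_s, iterations):
    for _ in range(iterations):
        time.sleep(wake_period_s)       # non-stutter regions stay gated throughout
        print(f"{stutter.name}: drained {stutter.wake_and_drain()} request(s)")

if __name__ == "__main__":
    display_like = Region("stutter", clients=[["refresh"], []])
    stutter_loop(display_like, wake_period_s=0.01, iterations=3)
```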
-
Publication number: US11054887B2
Publication date: 2021-07-06
Application number: US15856546
Filing date: 2017-12-28
Applicant: Advanced Micro Devices, Inc.
Inventor: Benjamin Tsien , Greggory D. Donley , Bryan P. Broussard
IPC: G06F1/32 , G06F1/3287 , G06F9/50 , G06F1/3209 , G06F1/3234 , G06F1/3296
Abstract: Systems, apparatuses, and methods for performing efficient power management for a multi-node computing system are disclosed. A computing system includes multiple nodes. When power down negotiation is distributed, negotiation for system-wide power down occurs within a lower level of a node hierarchy prior to negotiation for power down occurring at a higher level of the node hierarchy. When power down negotiation is centralized, a given node combines a state of its clients with indications received on its downstream link and sends an indication on an upstream link based on the combining. Only a root node sends power down requests.
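A minimal sketch of the centralized-negotiation case: each node combines its own clients' idleness with the indications from its downstream links and forwards the result upstream, and only the root turns a fully idle tree into a power-down request. The node structure and combining rule (a simple AND) are assumptions.

```python
# Illustrative sketch of centralized power-down negotiation: each node combines
# its clients' state with downstream indications and sends one indication
# upstream; only the root node issues the power-down request.

class Node:
    def __init__(self, name, clients_idle, children=()):
        self.name = name
        self.clients_idle = clients_idle
        self.children = list(children)

    def upstream_indication(self):
        # combine local client state with indications from downstream links
        return self.clients_idle and all(c.upstream_indication() for c in self.children)

def root_negotiate(root: Node):
    if root.upstream_indication():
        return f"{root.name}: send power-down request"
    return f"{root.name}: stay powered"

if __name__ == "__main__":
    leaf_a = Node("leaf_a", clients_idle=True)
    leaf_b = Node("leaf_b", clients_idle=True)
    root = Node("root", clients_idle=True, children=[leaf_a, leaf_b])
    print(root_negotiate(root))
```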
-
Publication number: US10671148B2
Publication date: 2020-06-02
Application number: US15850261
Filing date: 2017-12-21
Applicant: Advanced Micro Devices, Inc.
Inventor: Benjamin Tsien , Bryan P. Broussard , Vydhyanathan Kalyanasundharam
IPC: G06F1/3296 , G06F13/26 , G06F12/0831 , G06F1/3234
Abstract: Systems, apparatuses, and methods for performing efficient power management for a multi-node computing system are disclosed. A computing system including multiple nodes utilizes a non-uniform memory access (NUMA) architecture. A first node receives a broadcast probe from a second node. The first node spoofs a miss response for a powered down third node, which prevents the third node from waking up to respond to the broadcast probe. Prior to powering down, the third node flushed its probe filter and caches, and updated its system memory with the received dirty cache lines. The computing system includes a master node for storing interrupt priorities of the multiple cores in the computing system for arbitrated interrupts. The cores store indications of fixed interrupt identifiers for each core in the computing system. Arbitrated and fixed interrupts are handled by cores with point-to-point unicast messages, rather than broadcast messages.
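A sketch of the probe-spoofing idea in the first part of this abstract: because the powered-down node flushed its probe filter and caches before sleeping, another node can safely answer "miss" on its behalf. The node objects and response format are hypothetical.

```python
# Illustrative sketch (assumed names): when a broadcast probe targets a node
# that flushed its caches and powered down, a neighboring node answers "miss"
# on its behalf so the sleeping node is never woken just to report no copy.

class NodeState:
    def __init__(self, name, powered, cache_lines=()):
        self.name = name
        self.powered = powered
        self.cache = set(cache_lines)   # empty if flushed before power-down

def handle_probe(line, responder: NodeState, sleeping_peer: NodeState):
    responses = {responder.name: "hit" if line in responder.cache else "miss"}
    if not sleeping_peer.powered:
        # spoof a miss for the powered-down peer; safe because it flushed
        responses[sleeping_peer.name] = "miss (spoofed)"
    return responses

if __name__ == "__main__":
    node1 = NodeState("node1", powered=True, cache_lines={0x40})
    node3 = NodeState("node3", powered=False)   # flushed and powered down
    print(handle_probe(0x80, node1, node3))
```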
-