-
Publication Number: US20220206841A1
Publication Date: 2022-06-30
Application Number: US17136725
Application Date: 2020-12-29
Applicant: Advanced Micro Devices, Inc.
Inventor: Bradford Michael Beckmann , Steven Tony Tye , Brian L. Sumner , Nicolai Hähnle
Abstract: Systems, apparatuses, and methods for dynamic graphics processing unit (GPU) register allocation are disclosed. A GPU includes at least a plurality of compute units (CUs), a control unit, and a plurality of registers for each CU. If a new wavefront requests more registers than are currently available on the CU, the control unit spills registers associated with stack frames at the bottom of a stack, since they are not likely to be used in the near future. The control unit has complete flexibility in determining how many registers to spill based on dynamic demands and can prefetch the upcoming necessary fills without software involvement. Effectively, the control unit manages the physical register file as a cache. This allows younger workgroups to be dynamically descheduled so that older workgroups can allocate additional registers when needed, ensuring improved fairness and better forward-progress guarantees.
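The spill policy described in this abstract can be illustrated with a small sketch. The C++ below is a minimal, hypothetical model (the names ControlUnit, StackFrame, and spillToMemory are invented for illustration and are not taken from the patent): when a new wavefront requests more registers than are free, the oldest frames at the bottom of the stack are spilled first, mirroring the cache-like management of the physical register file.

```cpp
#include <cstdint>
#include <deque>

// One call-stack frame's register allocation on a compute unit (hypothetical).
struct StackFrame {
    uint32_t numRegs;   // physical registers held by this frame
};

class ControlUnit {
public:
    explicit ControlUnit(uint32_t totalRegs) : freeRegs_(totalRegs) {}

    // Called when a new wavefront (or callee frame) requests `requested` registers.
    // Spills bottom-of-stack frames -- the least likely to be reused soon -- until
    // the request fits, treating the physical register file like a cache.
    bool allocate(uint32_t requested) {
        while (freeRegs_ < requested && !frames_.empty()) {
            spillToMemory(frames_.front());       // write the oldest frame's registers out
            freeRegs_ += frames_.front().numRegs;
            frames_.pop_front();
        }
        if (freeRegs_ < requested)
            return false;                         // request still cannot be satisfied
        freeRegs_ -= requested;
        frames_.push_back({requested});           // newest frame sits on top of the stack
        return true;
    }

private:
    void spillToMemory(const StackFrame&) { /* e.g. copy register contents to scratch memory */ }

    std::deque<StackFrame> frames_;  // front = bottom of stack, back = top
    uint32_t freeRegs_;
};
```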
-
Publication Number: US11875197B2
Publication Date: 2024-01-16
Application Number: US17136738
Application Date: 2020-12-29
Applicant: Advanced Micro Devices, Inc.
Inventor: Bradford Michael Beckmann , Steven Tony Tye , Brian L. Sumner , Nicolai Hähnle
CPC classification number: G06F9/52 , G06F9/30141 , G06F9/3836 , G06T1/20
Abstract: Systems, apparatuses, and methods for managing a number of wavefronts permitted to concurrently execute in a processing system are disclosed. An apparatus includes a register file with a plurality of registers and a plurality of compute units configured to execute wavefronts. A control unit of the apparatus is configured to allow a first number of wavefronts to execute concurrently on the plurality of compute units. The control unit is configured to allow no more than a second number of wavefronts to execute concurrently on the plurality of compute units, wherein the second number is less than the first number, in response to detection that thrashing of the register file is above a threshold. The control unit is configured to detect said thrashing based at least in part on a number of registers in use by executing wavefronts that spill to memory.
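As a rough illustration of the throttling behavior described in this abstract, the sketch below models a control unit that admits up to a first number of wavefronts and falls back to a smaller second number once a spill-based thrashing metric crosses a threshold. The names and the specific metric are assumptions made for the example, not details taken from the claims.

```cpp
#include <cstdint>

// Hypothetical wavefront-concurrency throttle keyed on register-file thrashing.
class WavefrontThrottle {
public:
    WavefrontThrottle(uint32_t maxWavefronts, uint32_t reducedWavefronts,
                      uint32_t spillThreshold)
        : maxWf_(maxWavefronts), reducedWf_(reducedWavefronts),
          threshold_(spillThreshold) {}

    // Record that `count` registers in use by executing wavefronts were spilled to memory.
    void onRegistersSpilled(uint32_t count) { spilledRegs_ += count; }

    // Current cap on concurrently executing wavefronts: the first number normally,
    // the smaller second number once the thrashing metric exceeds the threshold.
    uint32_t concurrencyLimit() const {
        return (spilledRegs_ > threshold_) ? reducedWf_ : maxWf_;
    }

private:
    uint32_t maxWf_;             // first number of wavefronts
    uint32_t reducedWf_;         // second, smaller number of wavefronts
    uint32_t threshold_;         // thrashing threshold
    uint32_t spilledRegs_ = 0;   // spill-based thrashing metric
};
```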
-
Publication Number: US20200379802A1
Publication Date: 2020-12-03
Application Number: US16846654
Application Date: 2020-04-13
Applicant: Advanced Micro Devices, Inc.
Inventor: Steven Tony Tye , Brian L. Sumner , Bradford Michael Beckmann , Sooraj Puthoor
Abstract: Systems, apparatuses, and methods for implementing continuation analysis tasks (CATs) are disclosed. In one embodiment, a system implements hardware acceleration of CATs to manage the dependencies and scheduling of an application composed of multiple tasks. In one embodiment, a continuation packet is referenced directly by a first task. When the first task completes, the first task enqueues a continuation packet on a first queue. The first task can specify on which queue to place the continuation packet. The agent responsible for the first queue dequeues and executes the continuation packet, which invokes an analysis phase that is performed prior to determining which dependent tasks to enqueue. If it is determined during the analysis phase that a second task is now ready to be launched, the second task is enqueued on one of the queues. Then, an agent responsible for this queue dequeues and executes the second task.
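A minimal sketch of the continuation-packet flow described in this abstract follows. The types and function names (ContinuationPacket, onTaskComplete, agentStep) are hypothetical; the sketch only shows the sequence in the abstract: a completing task enqueues a continuation packet on a queue it chooses, and the agent owning that queue dequeues the packet, runs the analysis phase, and enqueues any dependent tasks that are now ready.

```cpp
#include <functional>
#include <queue>
#include <vector>

// Hypothetical continuation packet: its analysis phase returns the dependent
// tasks that have become ready to launch.
struct ContinuationPacket {
    std::function<std::vector<std::function<void()>>()> analyze;
};

struct Queue {
    std::queue<ContinuationPacket> packets;
};

// Task side: on completion, the first task places its continuation packet on
// the queue it has specified.
void onTaskComplete(Queue& chosenQueue, ContinuationPacket packet) {
    chosenQueue.packets.push(std::move(packet));
}

// Agent side: the agent responsible for the queue dequeues the packet, runs
// the analysis phase, and enqueues any tasks that are now ready.
void agentStep(Queue& ownedQueue, std::queue<std::function<void()>>& taskQueue) {
    if (ownedQueue.packets.empty())
        return;
    ContinuationPacket packet = std::move(ownedQueue.packets.front());
    ownedQueue.packets.pop();
    for (auto& task : packet.analyze())    // analysis decides which tasks are ready
        taskQueue.push(std::move(task));   // e.g. the "second task" becomes runnable
}
```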
-
Publication Number: US20180349145A1
Publication Date: 2018-12-06
Application Number: US15607991
Application Date: 2017-05-30
Applicant: Advanced Micro Devices, Inc.
Inventor: Steven Tony Tye , Brian L. Sumner , Bradford Michael Beckmann , Sooraj Puthoor
CPC classification number: G06F9/505 , G06F9/5066 , G06F2209/509
Abstract: Systems, apparatuses, and methods for implementing continuation analysis tasks (CATs) are disclosed. In one embodiment, a system implements hardware acceleration of CATs to manage the dependencies and scheduling of an application composed of multiple tasks. In one embodiment, a continuation packet is referenced directly by a first task. When the first task completes, the first task enqueues a continuation packet on a first queue. The first task can specify on which queue to place the continuation packet. The agent responsible for the first queue dequeues and executes the continuation packet, which invokes an analysis phase that is performed prior to determining which dependent tasks to enqueue. If it is determined during the analysis phase that a second task is now ready to be launched, the second task is enqueued on one of the queues. Then, an agent responsible for this queue dequeues and executes the second task.
-
Publication Number: US20230153149A1
Publication Date: 2023-05-18
Application Number: US18154012
Application Date: 2023-01-12
Applicant: Advanced Micro Devices, Inc.
Inventor: Bradford Michael Beckmann , Steven Tony Tye , Brian L. Sumner , Nicolai Hähnle
CPC classification number: G06F9/4843 , G06F9/3836 , G06F15/80 , G06F11/3024 , G06F11/3006 , G06F9/30098
Abstract: Systems, apparatuses, and methods for dynamic graphics processing unit (GPU) register allocation are disclosed. A GPU includes at least a plurality of compute units (CUs), a control unit, and a plurality of registers for each CU. If a new wavefront requests more registers than are currently available on the CU, the control unit spills registers associated with stack frames at the bottom of a stack, since they are not likely to be used in the near future. The control unit has complete flexibility in determining how many registers to spill based on dynamic demands and can prefetch the upcoming necessary fills without software involvement. Effectively, the control unit manages the physical register file as a cache. This allows younger workgroups to be dynamically descheduled so that older workgroups can allocate additional registers when needed, ensuring improved fairness and better forward-progress guarantees.
-
Publication Number: US11579922B2
Publication Date: 2023-02-14
Application Number: US17136725
Application Date: 2020-12-29
Applicant: Advanced Micro Devices, Inc.
Inventor: Bradford Michael Beckmann , Steven Tony Tye , Brian L. Sumner , Nicolai Hähnle
Abstract: Systems, apparatuses, and methods for dynamic graphics processing unit (GPU) register allocation are disclosed. A GPU includes at least a plurality of compute units (CUs), a control unit, and a plurality of registers for each CU. If a new wavefront requests more registers than are currently available on the CU, the control unit spills registers associated with stack frames at the bottom of a stack, since they are not likely to be used in the near future. The control unit has complete flexibility in determining how many registers to spill based on dynamic demands and can prefetch the upcoming necessary fills without software involvement. Effectively, the control unit manages the physical register file as a cache. This allows younger workgroups to be dynamically descheduled so that older workgroups can allocate additional registers when needed, ensuring improved fairness and better forward-progress guarantees.
-
Publication Number: US11544106B2
Publication Date: 2023-01-03
Application Number: US16846654
Application Date: 2020-04-13
Applicant: Advanced Micro Devices, Inc.
Inventor: Steven Tony Tye , Brian L. Sumner , Bradford Michael Beckmann , Sooraj Puthoor
Abstract: Systems, apparatuses, and methods for implementing continuation analysis tasks (CATs) are disclosed. In one embodiment, a system implements hardware acceleration of CATs to manage the dependencies and scheduling of an application composed of multiple tasks. In one embodiment, a continuation packet is referenced directly by a first task. When the first task completes, the first task enqueues a continuation packet on a first queue. The first task can specify on which queue to place the continuation packet. The agent responsible for the first queue dequeues and executes the continuation packet, which invokes an analysis phase that is performed prior to determining which dependent tasks to enqueue. If it is determined during the analysis phase that a second task is now ready to be launched, the second task is enqueued on one of the queues. Then, an agent responsible for this queue dequeues and executes the second task.
-
Publication Number: US20220206876A1
Publication Date: 2022-06-30
Application Number: US17136738
Application Date: 2020-12-29
Applicant: Advanced Micro Devices, Inc.
Inventor: Bradford Michael Beckmann , Steven Tony Tye , Brian L. Sumner , Nicolai Hähnle
Abstract: Systems, apparatuses, and methods for managing a number of wavefronts permitted to concurrently execute in a processing system are disclosed. An apparatus includes a register file with a plurality of registers and a plurality of compute units configured to execute wavefronts. A control unit of the apparatus is configured to allow a first number of wavefronts to execute concurrently on the plurality of compute units. The control unit is configured to allow no more than a second number of wavefronts to execute concurrently on the plurality of compute units, wherein the second number is less than the first number, in response to detection that thrashing of the register file is above a threshold. The control unit is configured to detect said thrashing based at least in part on a number of registers in use by executing wavefronts that spill to memory.
-