-
Publication No.: US20180314638A1
Publication Date: 2018-11-01
Application No.: US15498076
Filing Date: 2017-04-26
Applicant: Advanced Micro Devices, Inc.
Inventor: Michael W. LeBeane , Walter B. Benton , Vinay Agarwala
IPC: G06F12/0817 , G06F12/1081 , G06F12/0831
Abstract: Methods, devices, and systems for GPU cache injection. A GPU compute node includes a network interface controller (NIC) which includes NIC receiver circuitry which can receive data for processing on the GPU, NIC transmitter circuitry which can send the data to a main memory of the GPU compute node and which can send coherence information to a coherence directory of the GPU compute node based on the data. The GPU compute node also includes a GPU which includes GPU receiver circuitry which can receive the coherence information; GPU processing circuitry which can determine, based on the coherence information, whether the data satisfies a heuristic; and GPU loading circuitry which can load the data into a cache of the GPU from the main memory if the data satisfies the heuristic.
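The injection decision in this abstract can be sketched in a few lines. This is a hypothetical illustration, not the patent's implementation: the `CoherenceInfo` fields, the size-based heuristic, and the dictionary-backed cache/memory are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class CoherenceInfo:
    address: int           # main-memory address the NIC wrote to
    size: int              # bytes written by the NIC
    flagged_for_gpu: bool  # NIC marked this payload as GPU-bound

def satisfies_heuristic(info, cache_capacity=64 * 1024):
    """Example heuristic: inject only GPU-bound payloads small enough
    not to thrash the GPU cache (here, at most a quarter of capacity)."""
    return info.flagged_for_gpu and info.size <= cache_capacity // 4

def on_coherence_message(info, cache, main_memory):
    """On receiving coherence information, load the data into the GPU
    cache from main memory only if the heuristic passes; otherwise the
    data stays in main memory and is fetched on demand."""
    if satisfies_heuristic(info):
        cache[info.address] = main_memory[info.address]
        return True
    return False
```

The point of the heuristic is to avoid polluting the GPU cache with bulk transfers that would evict hot working-set lines.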
-
Publication No.: US20180081715A1
Publication Date: 2018-03-22
Application No.: US15267936
Filing Date: 2016-09-16
Applicant: Advanced Micro Devices, Inc.
Inventor: Michael W. LeBeane , Abhisek Pan , Steven K. Reinhardt
CPC classification number: G06F9/505
Abstract: Techniques for scheduling processing tasks in a device having multiple computing elements are disclosed. A network interface controller of the device receives processing tasks, for execution on the computing elements, from a network that is external to the device. The network interface controller schedules the tasks for execution on the computing elements based on policy data available to the network interface controller. A scheduler within the network interface controller, which can be implemented as a standalone processing unit (such as a microcontroller, a programmable processing core, or an application specific integrated circuit), performs such scheduling, thereby freeing the central processing unit of the device from the burden of performing scheduling operations. The scheduler schedules the tasks according to any technically feasible scheduling technique.
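One "technically feasible scheduling technique" a NIC-resident scheduler could apply is a policy table with a round-robin fallback. The policy format and element names below are illustrative assumptions, not the patent's API.

```python
def schedule(tasks, policy):
    """Assign each incoming task to a computing element using policy
    data: a preference table mapping task kind -> element, with
    round-robin over all elements for unmatched kinds."""
    elements = policy["elements"]        # computing elements in the device
    prefer = policy.get("prefer", {})    # policy data: kind -> element
    assignments = {}
    rr = 0
    for task in tasks:
        if task["kind"] in prefer:
            assignments[task["id"]] = prefer[task["kind"]]
        else:
            assignments[task["id"]] = elements[rr % len(elements)]
            rr += 1
    return assignments
```

Because this logic runs on the NIC's own processing unit, tasks arriving from the network can be dispatched without waking the host CPU.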
-
Publication No.: US11922207B2
Publication Date: 2024-03-05
Application No.: US16993150
Filing Date: 2020-08-13
Applicant: Advanced Micro Devices, Inc.
Inventor: Michael W. LeBeane , Khaled Hamidouche , Brandon K. Potter
CPC classification number: G06F9/48 , G06F9/3836 , G06F9/3887 , G06F9/54 , H04L67/10 , G06T1/20
Abstract: An approach is provided for coalescing network commands in a GPU that implements a SIMT architecture. Compatible next network operations from different threads are coalesced into a single network command packet. This reduces the number of network command packets generated and issued by threads, thereby increasing efficiency and improving throughput. The approach is applicable to any number of threads and any thread organization methodology, such as wavefronts, warps, etc.
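The coalescing step can be sketched as grouping each thread's pending network operation by a compatibility key. The key used here (same operation type and same destination) is an assumption for illustration; the abstract does not define what makes operations compatible.

```python
def coalesce(ops):
    """Take one pending network operation per thread and merge
    compatible ones (same op type, same destination) into a single
    command packet carrying all of their payloads."""
    packets = {}
    for op in ops:
        key = (op["op"], op["dest"])            # assumed compatibility rule
        packets.setdefault(key, []).append(op["payload"])
    return [{"op": op, "dest": dest, "payloads": payloads}
            for (op, dest), payloads in packets.items()]
```

With 64 threads in a wavefront all issuing a put to the same peer, this collapses 64 command packets into one.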
-
Publication No.: US20220066946A1
Publication Date: 2022-03-03
Application No.: US17008435
Filing Date: 2020-08-31
Applicant: Advanced Micro Devices, Inc.
Inventor: Jagadish B. Kotra , Michael W. LeBeane
IPC: G06F12/1027 , G06F12/0862 , G06F12/0846 , G06F12/0891 , G06F12/126
Abstract: Techniques are disclosed for processing address translations. The techniques include detecting a first miss for a first address translation request for a first address translation in a first translation lookaside buffer; in response to the first miss, fetching the first address translation into the first translation lookaside buffer and evicting a second address translation from the first translation lookaside buffer into an instruction cache or local data share memory; detecting, in the first translation lookaside buffer, a second miss for a second address translation request referencing the second address translation; and, in response to the second miss, fetching the second address translation from the instruction cache or the local data share memory.
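The scheme amounts to a victim cache for TLB entries, with the instruction cache or local data share playing the role of the victim store. The sketch below models that flow; the LRU policy, class shape, and dictionary-backed stores are illustrative assumptions.

```python
from collections import OrderedDict

class TLB:
    def __init__(self, capacity, victim_store, page_table):
        self.entries = OrderedDict()      # LRU order: oldest first
        self.capacity = capacity
        self.victim_store = victim_store  # stands in for I-cache / LDS
        self.page_table = page_table      # fallback: full table walk

    def translate(self, vpn):
        if vpn in self.entries:           # TLB hit
            self.entries.move_to_end(vpn)
            return self.entries[vpn]
        # TLB miss: check the victim store before walking the page table
        ppn = self.victim_store.pop(vpn, None)
        if ppn is None:
            ppn = self.page_table[vpn]
        if len(self.entries) >= self.capacity:
            # evict the LRU translation into the victim store
            old_vpn, old_ppn = self.entries.popitem(last=False)
            self.victim_store[old_vpn] = old_ppn
        self.entries[vpn] = ppn
        return ppn
```

The benefit is that a translation evicted under capacity pressure can be recovered from on-chip memory instead of repeating a costly page-table walk.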
-
Publication No.: US10089155B2
Publication Date: 2018-10-02
Application No.: US14862038
Filing Date: 2015-09-22
Applicant: Advanced Micro Devices, Inc.
Inventor: Michael W. LeBeane , Deepak Majeti , Mauricio Breternitz
Abstract: First and second processor cores are configured to concurrently execute tasks. A scheduler is configured to schedule tasks for execution by the first and second processor cores. The first processor core is configured to selectively steal a task that was previously scheduled for execution by the second processor core based on additional power consumption incurred by migrating the task from the second processor core to the first processor core.
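A minimal power-aware stealing predicate might compare the migration cost against the energy the idle core would otherwise burn waiting. The cost model below (fixed migration energy versus idle-power savings) is an invented illustration, not the patent's actual criterion.

```python
def should_steal(task_cycles, migrate_energy_nj, idle_power_mw, freq_mhz):
    """Steal a task scheduled on the other core only if the energy
    saved by keeping this core busy (instead of idling for the task's
    duration) exceeds the extra energy spent migrating the task.
    Units: mW * us = nJ, so the comparison is consistent."""
    duration_us = task_cycles / freq_mhz          # cycles / (cycles per us)
    idle_energy_nj = idle_power_mw * duration_us  # energy burned idling
    return idle_energy_nj > migrate_energy_nj
```

A long task on a leaky idle core justifies paying the migration cost; a short task does not.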
-
Publication No.: US20220100391A1
Publication Date: 2022-03-31
Application No.: US17033170
Filing Date: 2020-09-25
Applicant: Advanced Micro Devices, Inc.
Inventor: Michael W. LeBeane , Khaled Hamidouche , Hari S. Thangirala , Brandon Keith Potter
IPC: G06F3/06 , G06F12/02 , G06F12/0802
Abstract: A framework disclosed herein extends a relaxed, scoped memory model to a system that includes nodes across a commodity network and maintains coherency across the system. A new scope, cluster scope, is defined that allows memory accesses at scopes less than cluster scope to operate on locally cached versions of remote data from across the commodity network without having to issue expensive network operations. Cluster scope operations generate network commands that are used to synchronize memory across the commodity network.
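The key property is that only the new, widest scope pays for network traffic. The sketch below illustrates that with a release operation; the sub-cluster scope names (workgroup, device) follow common GPU memory models and are an assumption, as is the command format.

```python
SCOPES = ["workgroup", "device", "cluster"]  # increasing visibility

def release(scope, dirty_lines, network):
    """Make dirty cached lines visible at the requested scope.
    Releases below cluster scope only update local caches; a
    cluster-scope release additionally emits network commands to
    synchronize remote copies across the commodity network."""
    flushed = list(dirty_lines)
    dirty_lines.clear()                       # visible locally
    if scope == "cluster":
        for line in flushed:
            network.append(("flush_remote", line))  # expensive path
    return flushed
```

Code that synchronizes at device scope therefore runs at local-cache speed even when the data originally came from a remote node.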
-
Publication No.: US10936533B2
Publication Date: 2021-03-02
Application No.: US15297079
Filing Date: 2016-10-18
Applicant: Advanced Micro Devices, Inc.
Inventor: Michael W. LeBeane , Steven K. Reinhardt
IPC: G06F15/16 , G06F15/173 , H04L12/861 , H04L12/863
Abstract: Methods, devices, and systems for transmitting data over a computer communications network are disclosed. A queue of communications commands can be pre-generated using a central processing unit (CPU) and stored in a device memory of a network interface controller (NIC). Thereafter, if a graphics processing unit (GPU) has data to communicate to a remote GPU, it can store the data in a send buffer, where the location in the buffer is pointed to by a pre-generated command. The GPU can then signal to the interface device that the data is ready, triggering execution of the pre-generated command to send the data.
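The division of labor in this abstract (CPU pre-generates, GPU fills and signals, NIC executes) can be sketched as three small functions. The slot-per-command layout and doorbell list are illustrative assumptions.

```python
def pregenerate_commands(num_slots):
    """CPU side: build one send command per send-buffer slot ahead of
    time; these are stored in the NIC's device memory."""
    return [{"slot": i, "op": "send"} for i in range(num_slots)]

def gpu_send(data, slot, send_buffer, doorbell):
    """GPU side: place the data where the pre-generated command
    points, then signal the NIC -- no CPU involvement needed."""
    send_buffer[slot] = data
    doorbell.append(slot)

def nic_poll(doorbell, commands, send_buffer, wire):
    """NIC side: each doorbell triggers execution of the matching
    pre-generated command, which reads the slot and sends the data."""
    while doorbell:
        slot = doorbell.pop(0)
        cmd = commands[slot]
        wire.append(send_buffer[cmd["slot"]])
```

Because the commands already exist when the GPU produces data, the send path avoids a round trip through the CPU at communication time.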
-
Publication No.: US20200034195A1
Publication Date: 2020-01-30
Application No.: US16049216
Filing Date: 2018-07-30
Applicant: Advanced Micro Devices, Inc.
Inventor: Michael W. LeBeane , Khaled Hamidouche , Bradford M. Beckmann
Abstract: Techniques for improved networking performance in systems where a graphics processing unit or other highly parallel non-central-processing-unit (referred to as an accelerated processing device or “APD” herein) has the ability to directly issue commands to a networking device such as a network interface controller (“NIC”) are disclosed. According to a first technique, the latency associated with loading certain metadata into NIC hardware memory is reduced or eliminated by pre-fetching network command queue metadata into hardware network command queue metadata slots of the NIC, thereby reducing the latency associated with fetching that metadata at a later time. A second technique involves reducing latency by prioritizing work on an APD when it is known that certain network traffic is soon to arrive over the network via a NIC.
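The first technique behaves like a software-managed cache of queue metadata in NIC hardware slots. The sketch below models the fast path a prefetched queue enjoys on a doorbell ring; the slot count, eviction policy, and metadata shape are invented for illustration.

```python
class NIC:
    HW_SLOTS = 4  # assumed number of hardware metadata slots

    def __init__(self, queue_metadata):
        self.queue_metadata = queue_metadata  # all queues, in host memory
        self.hw_slots = {}                    # queue id -> cached metadata

    def prefetch(self, queue_id):
        """Pre-fetch a command queue's metadata into a hardware slot,
        paying the fetch latency ahead of time."""
        if len(self.hw_slots) < self.HW_SLOTS:
            self.hw_slots[queue_id] = self.queue_metadata[queue_id]

    def doorbell(self, queue_id):
        """Return True if the ring hit prefetched metadata (fast path).
        On a miss, the metadata must be fetched now -- the latency the
        prefetch technique is designed to avoid."""
        if queue_id in self.hw_slots:
            return True
        if len(self.hw_slots) >= self.HW_SLOTS:   # evict an arbitrary slot
            self.hw_slots.pop(next(iter(self.hw_slots)))
        self.hw_slots[queue_id] = self.queue_metadata[queue_id]
        return False
```

The second technique in the abstract, prioritizing APD work when traffic is known to be imminent, is a scheduling policy rather than a data-path mechanism and is not modeled here.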
-
Publication No.: US12086422B2
Publication Date: 2024-09-10
Application No.: US18320819
Filing Date: 2023-05-19
Applicant: Advanced Micro Devices, Inc.
Inventor: Michael W. LeBeane , Khaled Hamidouche , Hari S. Thangirala , Brandon Keith Potter
IPC: G06F3/06 , G06F12/02 , G06F12/0802
CPC classification number: G06F3/0619 , G06F3/0656 , G06F3/067 , G06F12/0223 , G06F12/0802 , G06F2212/152
Abstract: A framework disclosed herein extends a relaxed, scoped memory model to a system that includes nodes across a commodity network and maintains coherency across the system. A new scope, cluster scope, is defined that allows memory accesses at scopes less than cluster scope to operate on locally cached versions of remote data from across the commodity network without having to issue expensive network operations. Cluster scope operations generate network commands that are used to synchronize memory across the commodity network.
-
Publication No.: US20230289070A1
Publication Date: 2023-09-14
Application No.: US18320819
Filing Date: 2023-05-19
Applicant: Advanced Micro Devices, Inc.
Inventor: Michael W. LeBeane , Khaled Hamidouche , Hari S. Thangirala , Brandon Keith Potter
IPC: G06F3/06 , G06F12/02 , G06F12/0802
CPC classification number: G06F3/0619 , G06F12/0223 , G06F3/0656 , G06F3/067 , G06F12/0802 , G06F2212/152
Abstract: A framework disclosed herein extends a relaxed, scoped memory model to a system that includes nodes across a commodity network and maintains coherency across the system. A new scope, cluster scope, is defined that allows memory accesses at scopes less than cluster scope to operate on locally cached versions of remote data from across the commodity network without having to issue expensive network operations. Cluster scope operations generate network commands that are used to synchronize memory across the commodity network.
-