Abstract:
A method for handling parallel processing clients associated with a server in a GPU, the method comprising: receiving a failure indication for at least one client running a thread in the GPU; determining threads in the GPU associated with the failing client; exiting threads in the GPU associated with the failing client; and continuing to execute remaining threads in the GPU for other clients running threads in the GPU.
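As a minimal sketch of the flow this abstract describes, assuming hypothetical types (ClientId, GpuThread, ThreadTable) that do not come from the disclosure, the per-client cleanup might look like the following in C++:

```cpp
// Hedged sketch only: all type and function names are illustrative,
// not taken from the patent. It models the abstract's four steps.
#include <cstdint>
#include <vector>

using ClientId = std::uint32_t;

struct GpuThread {
    ClientId owner;   // client that launched this thread
    bool     active;  // whether the thread is still scheduled
};

struct ThreadTable {
    std::vector<GpuThread> threads;

    // Invoked when a failure indication arrives for `failing`.
    void handleClientFailure(ClientId failing) {
        for (GpuThread& t : threads) {
            // Determine the threads associated with the failing client
            // and exit only those threads.
            if (t.active && t.owner == failing) {
                t.active = false;
            }
            // Remaining threads owned by other clients keep executing.
        }
    }
};
```

The essential design point is selective teardown: only the failing client's threads are exited, so unrelated clients sharing the GPU are unaffected.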
Abstract:
A method for analyzing race conditions between multiple threads of an application is disclosed. The method comprises accessing hazard records for an application under test. It further comprises creating a graph comprising a plurality of vertices and a plurality of edges using the hazard records, wherein each vertex of the graph comprises information about a code location of a hazard and wherein each edge of the graph comprises hazard information between one or more vertices. Additionally, it comprises assigning each edge a weight, wherein the weight depends on a number and relative priority of hazards associated with a respective edge. Finally, it comprises traversing the graph to report an analysis record for each hazard represented in the graph.
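As an illustration of the graph construction, edge weighting, and traversal described above, the following C++ sketch builds weighted edges from simplified hazard records; the Hazard layout, its field names, and the weighting rule (summing per-hazard priorities per edge) are assumptions for demonstration, not details from the disclosure:

```cpp
// Hedged sketch: hazard records are reduced to two code locations and a
// priority; each unique location pair becomes a weighted graph edge.
#include <cstdio>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct Hazard {
    std::string locA, locB;  // code locations of the two racing accesses
    int priority;            // relative severity (higher = more severe)
};

int main() {
    std::vector<Hazard> records = {
        {"kernel.cu:10", "kernel.cu:42", 3},
        {"kernel.cu:10", "kernel.cu:42", 1},
        {"kernel.cu:7",  "kernel.cu:42", 2},
    };

    // Edge key: a pair of code-location vertices. The edge weight
    // accumulates the count and priority of hazards between them.
    std::map<std::pair<std::string, std::string>, int> edgeWeight;
    for (const Hazard& h : records)
        edgeWeight[{h.locA, h.locB}] += h.priority;

    // Traverse the graph and report an analysis record per edge.
    for (const auto& [edge, weight] : edgeWeight)
        std::printf("%s <-> %s : weight %d\n",
                    edge.first.c_str(), edge.second.c_str(), weight);
    return 0;
}
```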
Abstract:
Various embodiments include techniques for accessing extended memory in a parallel processing system via a high-bandwidth path to extended memory residing on a central processing unit. The disclosed extended memory system extends the directly addressable high-bandwidth memory local to a parallel processing system and avoids the performance penalties associated with low-bandwidth system memory. As a result, execution threads that are highly parallelizable and access a large memory space execute with increased performance on a parallel processing system relative to prior approaches.
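The disclosed extended memory path is a hardware/system mechanism rather than an API, so it cannot be reproduced directly in application code. As a loose present-day analogy only, CUDA managed memory similarly lets a kernel touch an allocation that may exceed the GPU's local high-bandwidth memory, with pages migrated on demand:

```cpp
// Loose analogy, not the patented mechanism: cudaMallocManaged lets a
// kernel access an allocation that may exceed device memory, with pages
// migrated between CPU and GPU on demand.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void touch(float* data, size_t n) {
    size_t i = static_cast<size_t>(blockIdx.x) * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;  // may fault pages in from CPU memory
}

int main() {
    const size_t n = 1ull << 28;  // 1 GiB of floats; may exceed free HBM
    float* data = nullptr;
    if (cudaMallocManaged(&data, n * sizeof(float)) != cudaSuccess) {
        std::printf("allocation failed\n");
        return 1;
    }
    unsigned int blocks = static_cast<unsigned int>((n + 255) / 256);
    touch<<<blocks, 256>>>(data, n);
    cudaDeviceSynchronize();
    cudaFree(data);
    return 0;
}
```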
Abstract:
The present invention facilitates efficient and effective utilization of unified virtual addresses across multiple components. In one embodiment, the disclosed approach uses Operating System (OS) allocation on the central processing unit (CPU) combined with graphics processing unit (GPU) driver mappings to provide a unified virtual address (VA) space across both GPU and CPU. The approach helps ensure that a GPU VA pointer does not collide with a CPU pointer provided by OS CPU allocation (e.g., one returned by the “malloc” C runtime API).
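Today's CUDA runtime already exposes a related guarantee under unified virtual addressing: a pointer from OS malloc and a pointer from cudaMalloc occupy a single non-colliding VA space, so the runtime can classify either one. The C++ sketch below illustrates that behavior (attribute reporting for plain host pointers varies by CUDA version); it is not the specific mechanism claimed above:

```cpp
// Illustrative only: under CUDA unified virtual addressing, CPU malloc
// pointers and GPU cudaMalloc pointers do not collide, so the runtime
// can classify any pointer via cudaPointerGetAttributes.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

static const char* kind(const void* p) {
    cudaPointerAttributes attr{};
    if (cudaPointerGetAttributes(&attr, p) != cudaSuccess)
        return "unknown";  // older CUDA versions error on plain host memory
    return attr.type == cudaMemoryTypeDevice ? "GPU" : "CPU";
}

int main() {
    void* cpuPtr = std::malloc(1024);  // OS allocation on the CPU
    void* gpuPtr = nullptr;
    cudaMalloc(&gpuPtr, 1024);         // driver-mapped GPU VA

    std::printf("cpuPtr %p -> %s\n", cpuPtr, kind(cpuPtr));
    std::printf("gpuPtr %p -> %s\n", gpuPtr, kind(gpuPtr));

    cudaFree(gpuPtr);
    std::free(cpuPtr);
    return 0;
}
```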