Abstract:
An adaptive contention-aware thread scheduler may place software threads for pairs of applications on the same socket of a multi-socket machine for execution in parallel. Initial placements may be based on profile data that characterizes the machine and its behavior when multiple applications execute on the same socket. The profile data may be collected during execution of other applications. It may identify performance counters within the cores of the processor sockets whose values are suitable for predicting whether the performance of a pair of applications will suffer when executed together on the same socket (e.g., values indicative of their demands for particular shared resources). During execution, the scheduler may examine the performance counters (or performance metrics derived therefrom) and make different placement decisions (e.g., placing an application with high demand for resources of one type together with an application with low demand for those resources).
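One way to picture the placement heuristic is a greedy complementary pairing over a single counter-derived demand metric. The C++ sketch below assumes a scalar `demand` value per application (e.g., last-level-cache misses per kilo-instruction) and a simple sort-and-pair strategy; both the `App` struct and the pairing rule are illustrative assumptions, not details from the abstract.

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical per-application profile: "demand" is a metric derived from
// hardware performance counters (e.g., LLC misses per kilo-instruction).
struct App {
    int    id;
    double demand;
};

// Greedy complementary pairing: sort by demand, then pair the hungriest
// application with the least hungry one, so that no socket is asked to
// host two applications with high demand for the same shared resources.
// With an odd number of applications, the median one is left unpaired.
std::vector<std::pair<App, App>> pair_for_sockets(std::vector<App> apps) {
    std::vector<std::pair<App, App>> pairs;
    if (apps.size() < 2) return pairs;
    std::sort(apps.begin(), apps.end(),
              [](const App& a, const App& b) { return a.demand < b.demand; });
    for (std::size_t lo = 0, hi = apps.size() - 1; lo < hi; ++lo, --hi)
        pairs.push_back({apps[hi], apps[lo]});  // high demand + low demand
    return pairs;
}
```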
Abstract:
Techniques are provided for reducing synchronization of tasks in a task scheduling system. A task queue includes multiple tasks, some of which require an I/O operation while other tasks require data stored locally in memory. A single thread is assigned to process tasks in the task queue. The thread determines if a task at the head of the task queue requires an I/O operation. If so, then the thread generates an I/O request, submits the I/O request, and may place the task at (or toward) the end of the task queue. When the task reaches the head of the task queue again, the thread determines if the data requested by the I/O request is available yet. If so, then the thread processes the task. Otherwise, the thread may place the task at (or toward) the end of the task queue again.
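The recycling loop can be sketched directly. In the C++ below, the `Task` callbacks (`needs_io`, `submit_io`, `io_ready`, `process`) are assumed names standing in for whatever asynchronous I/O interface the system uses; the point is that the single thread never blocks on I/O and never needs to synchronize on the queue.

```cpp
#include <deque>
#include <functional>
#include <utility>

// Illustrative task shape; the four callbacks are assumptions of this sketch.
struct Task {
    bool io_submitted = false;
    std::function<bool()> needs_io;   // does this task require an I/O operation?
    std::function<void()> submit_io;  // issue the asynchronous I/O request
    std::function<bool()> io_ready;   // has the requested data arrived?
    std::function<void()> process;    // do the actual work on local data
};

// Single-threaded scheduler: a task whose I/O is outstanding is recycled
// to the back of the queue rather than blocking the thread.
void run(std::deque<Task>& queue) {
    while (!queue.empty()) {
        Task t = std::move(queue.front());
        queue.pop_front();
        if (t.needs_io() && !t.io_submitted) {
            t.submit_io();                  // fire off the async I/O...
            t.io_submitted = true;
            queue.push_back(std::move(t));  // ...and revisit the task later
        } else if (t.io_submitted && !t.io_ready()) {
            queue.push_back(std::move(t));  // data not available yet; recycle
        } else {
            t.process();                    // data is local or I/O completed
        }
    }
}
```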
Abstract:
Multi-core computers may implement a resource management layer between the operating system and resource-management-enabled parallel runtime systems. The resource management components and runtime systems may collectively implement dynamic co-scheduling of hardware contexts when executing multiple parallel applications, using a spatial scheduling policy that grants high priority to one application per hardware context and a temporal scheduling policy for re-allocating unused hardware contexts. The runtime systems may receive resources on a varying number of hardware contexts as demands of the applications change over time, and the resource management components may co-ordinate to leave one runnable software thread for each hardware context. Periodic check-in operations may be used to determine (at times convenient to the applications) when hardware contexts should be re-allocated. Over-subscription of worker threads may reduce load imbalances between applications. A co-ordination table may store per-hardware-context information about resource demands and allocations.
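A minimal sketch of the per-hardware-context co-ordination table might look as follows; the field and function names are assumptions, and a real implementation would hold considerably more state. The spatial policy is captured by `high_priority_app`, and the check-in test lets a borrower return a context at a point convenient to it rather than being preempted.

```cpp
#include <atomic>

// Illustrative slot in the co-ordination table, one per hardware context.
struct ContextSlot {
    std::atomic<int>  high_priority_app{-1};  // spatial policy: the owner
    std::atomic<int>  running_app{-1};        // app currently on this context
    std::atomic<bool> owner_wants_it{false};  // owner's current resource demand
};

// Periodic check-in on a borrowed context (temporal policy): if the
// high-priority owner now wants its context back, the borrower yields it
// here, at a time convenient to the application.
bool should_yield(const ContextSlot& slot, int my_app) {
    return slot.running_app.load() == my_app &&
           slot.high_priority_app.load() != my_app &&
           slot.owner_wants_it.load();
}
```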
Abstract:
Fast modern interconnects may be exploited to control when garbage collection is performed on the nodes (e.g., virtual machines, such as JVMs) of a distributed system in which the individual processes communicate with each other and in which the heap memory is not shared. A garbage collection coordination mechanism (a coordinator implemented by a dedicated process on a single node or distributed across the nodes) may obtain or receive state information from each of the nodes and apply one of multiple supported garbage collection coordination policies to reduce the impact of garbage collection pauses, dependent on that information. For example, if the information indicates that a node is about to collect, the coordinator may trigger a collection on all of the other nodes (e.g., synchronizing collection pauses for batch-mode applications where throughput is important) or may steer requests to other nodes (e.g., for interactive applications where request latencies are important).
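The two example policies can be expressed as a small decision function. The type and policy names below are assumptions of this sketch; the real coordinator would act on richer per-node heap state and drive the triggering and steering over the interconnect.

```cpp
#include <vector>

enum class Policy { SynchronizePauses, SteerRequests };  // illustrative names

struct NodeState {
    int  id;
    bool about_to_collect;  // reported state: a collection is imminent
};

struct Actions {
    std::vector<int> trigger_gc_on;    // nodes told to collect now
    std::vector<int> route_away_from;  // nodes the load balancer should avoid
};

// Coordinator sketch: when a node reports an imminent collection, either
// align pauses across all nodes (batch mode, throughput matters) or steer
// new requests away from that node (interactive mode, latency matters).
Actions coordinate(const std::vector<NodeState>& nodes, Policy policy) {
    Actions out;
    for (const NodeState& n : nodes) {
        if (!n.about_to_collect) continue;
        if (policy == Policy::SynchronizePauses) {
            for (const NodeState& m : nodes)
                if (m.id != n.id) out.trigger_gc_on.push_back(m.id);
        } else {
            out.route_away_from.push_back(n.id);
        }
    }
    return out;
}
```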
Abstract:
A computer system including one or more processors and persistent, word-addressable memory implements a persistent atomic multi-word compare-and-swap operation. On entry, the operation is provided with a list of persistent memory locations of words to be updated, the respective expected current values contained in the persistent memory locations, and the respective new values to write to the persistent memory locations. The operation atomically compares the existing contents of the persistent memory locations to the respective expected values and, should they all match, updates the persistent memory locations with the new values and returns a successful status. Should any of the contents of the persistent memory locations not match a respective expected value, the operation returns a failed status. The operation is performed such that the system can recover from any failure or interruption by restoring the list of persistent memory locations.
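The operation's interface and success/failure semantics can be sketched as below. This is a semantics sketch only: the loop shown is not atomic or crash-safe by itself, and the recovery guarantee in the abstract requires machinery (e.g., a logged descriptor of the location list) that is elided here. The `PmcasEntry` name and the x86 flush sequence are assumptions.

```cpp
#include <cstdint>
#include <vector>
#include <immintrin.h>

// One word of the multi-word operation.
struct PmcasEntry {
    uint64_t* addr;      // persistent memory location to update
    uint64_t  expected;  // value the location is expected to hold
    uint64_t  desired;   // new value to install on success
};

// Force a store out to (persistent) memory; CLWB is preferred on newer
// x86 parts, but CLFLUSH is used here to keep the sketch portable.
static void persist(const void* p) {
    _mm_clflush(p);  // write the cache line back to memory
    _mm_sfence();    // order the flush before later stores
}

// Semantics sketch: compare every location against its expected value;
// only if all match, install and persist all new values. A real PMCAS
// makes this sequence atomic and recoverable across failures.
bool pmcas(std::vector<PmcasEntry>& ops) {
    for (const PmcasEntry& op : ops)
        if (*op.addr != op.expected) return false;  // any mismatch: fail
    for (PmcasEntry& op : ops) {
        *op.addr = op.desired;
        persist(op.addr);  // make the update durable before returning
    }
    return true;
}
```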
Abstract:
Adaptive data collections may include various types of data structures, including arrays, sets, bags, maps, and others. A simple interface for each adaptive collection may provide access via a unified API to adaptive implementations of the collection. A single adaptive data collection may include multiple, different adaptive implementations. A system configured to implement adaptive data collections may include the ability to adaptively select between various implementations, either manually or automatically, and to map a given workload to differing hardware configurations. Additionally, hardware resource needs of different configurations may be predicted from a small number of workload measurements. Adaptive data collections may provide language interoperability, such as by leveraging runtime compilation to build adaptive data collections and to compile and optimize implementation code and user code together. Adaptive data collections may also provide language independence, such that implementation code may be written once and subsequently used from multiple programming languages.
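The unified-API idea can be illustrated with a small C++ interface and two interchangeable implementations. The names, the `scan_fraction` workload measurement, and the selection threshold are all assumptions of this sketch; a real system could also re-select implementations at runtime.

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <unordered_map>

// Unified interface: callers see one API whichever implementation is chosen.
struct AdaptiveMap {
    virtual ~AdaptiveMap() = default;
    virtual void     put(uint64_t k, uint64_t v) = 0;
    virtual uint64_t get(uint64_t k) const = 0;
};

struct HashImpl : AdaptiveMap {     // fast point lookups
    std::unordered_map<uint64_t, uint64_t> m;
    void put(uint64_t k, uint64_t v) override { m[k] = v; }
    uint64_t get(uint64_t k) const override { return m.at(k); }
};

struct OrderedImpl : AdaptiveMap {  // better for ordered scans
    std::map<uint64_t, uint64_t> m;
    void put(uint64_t k, uint64_t v) override { m[k] = v; }
    uint64_t get(uint64_t k) const override { return m.at(k); }
};

// Selector sketch: a workload measurement drives the choice of
// implementation behind the unchanged interface.
std::unique_ptr<AdaptiveMap> make_map(double scan_fraction) {
    if (scan_fraction > 0.5) return std::make_unique<OrderedImpl>();
    return std::make_unique<HashImpl>();
}
```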
Abstract:
A runtime system for distributing work between multiple threads in multi-socket shared-memory machines may support fine-grained scheduling of parallel loops. The runtime system may implement a request combining technique in which a representative thread requests work on behalf of other threads. The request combining technique may be asynchronous; a thread may execute work while waiting to obtain additional work via the request combining technique. Loops can be nested within one another, and the runtime system may provide control over the way in which hardware contexts are allocated to the loops at the different levels. An “inside out” approach may be used for nested loops in which a loop indicates how many levels are nested inside it, rather than a conventional “outside in” approach to nesting.
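The request-combining step can be sketched with a detachable list of request nodes: every thread publishes a request, and whichever thread detaches the whole list becomes the representative, claiming iterations for all detached requests with a single atomic operation. The names and the fixed chunk size are assumptions of this sketch, and the spin loop stands in for the asynchronous case in which a waiting thread keeps executing work it already holds.

```cpp
#include <atomic>

constexpr long kChunk = 1024;  // iterations per request (assumed fixed here)

struct Request {
    std::atomic<long> start{-1};  // filled in by the representative thread
    Request*          next = nullptr;
};

std::atomic<long>     next_iter{0};       // shared loop counter
std::atomic<Request*> requests{nullptr};  // stack of pending requests

// Combined work request: publish a node, then try to become the
// representative by detaching the entire list; the representative claims
// one chunk per detached request with a single fetch_add and fulfils them.
long get_chunk(Request* req) {
    req->start.store(-1);
    req->next = requests.load();
    while (!requests.compare_exchange_weak(req->next, req)) {}

    if (Request* list = requests.exchange(nullptr)) {
        int n = 0;
        for (Request* r = list; r != nullptr; r = r->next) ++n;
        long base = next_iter.fetch_add(n * kChunk);  // one atomic op for n
        for (Request* r = list; r != nullptr; r = r->next, base += kChunk)
            r->start.store(base);                     // fulfil each request
    }

    long start;
    while ((start = req->start.load()) == -1) {
        // In the asynchronous scheme, the thread would execute work it
        // already holds here instead of spinning.
    }
    return start;  // caller runs iterations [start, start + kChunk)
}
```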
Abstract:
Transactional Lock Elision allows hardware transactions to execute unmodified critical sections protected by the same lock concurrently, by subscribing to the lock and verifying that it is available before committing the transaction. A “lazy subscription” optimization, which delays lock subscription, can potentially cause behavior that cannot occur when the critical sections are executed under the lock. Hardware extensions may provide mechanisms to ensure that lazy subscriptions are safe (e.g., that they result in correct behavior). Prior to executing a critical section transactionally, its lock and subscription code may be identified (e.g., by writing their locations to special registers). Prior to committing the transaction, the thread executing the critical section may verify that the correct lock was correctly subscribed to. If not, or if locations identified by the special registers have been modified, the transaction may be aborted. Nested critical sections associated with different lock types may invoke different subscription code.
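For contrast with the safe, hardware-assisted scheme described above, the basic lazy-subscription pattern looks as follows on x86 TSX (compile with -mrtm). The special registers and commit-time verification from the abstract have no software equivalent, so this sketch shows only the unprotected optimization: the lock is read (subscribed to) just before commit rather than at the start of the transaction.

```cpp
#include <immintrin.h>  // RTM intrinsics: _xbegin / _xend / _xabort

// Simple spin lock standing in for the lock being elided.
struct SpinLock {
    volatile int held = 0;
    void lock()   { while (__sync_lock_test_and_set(&held, 1)) while (held) {} }
    void unlock() { __sync_lock_release(&held); }
};

// Transactional lock elision with lazy subscription: the critical section
// runs first, and the lock word enters the transaction's read set only at
// the end. This is exactly the window in which incorrect behavior can
// arise without the hardware checks described in the abstract.
template <typename CriticalSection>
void tle_lazy(SpinLock& lock, CriticalSection body) {
    if (_xbegin() == _XBEGIN_STARTED) {
        body();                        // run the critical section speculatively
        if (lock.held) _xabort(0xff);  // lazy subscription: check the lock last
        _xend();                       // commit; the lock is now subscribed
        return;
    }
    lock.lock();                       // fallback path: acquire the lock
    body();
    lock.unlock();
}
```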