Abstract:
A first data accessor acquires a first lock associated with a critical section. The first data accessor initiates a help session associated with a first operation of the critical section. In the help session, a second data accessor (which has not acquired the first lock) performs one or more sub-operations of the first operation. The first data accessor releases the first lock after at least the first operation has been completed.
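A minimal C++ sketch of how such a help session might look, assuming the first operation decomposes into independent sub-operations; the names used here (first_lock, next_sub_op, apply_sub_op, kSubOps) are illustrative and are not taken from the abstract:

    #include <atomic>
    #include <cstdio>
    #include <mutex>
    #include <thread>

    constexpr int kSubOps = 8;                 // sub-operations of the first operation
    std::mutex first_lock;                     // lock associated with the critical section
    std::atomic<int> next_sub_op{kSubOps};     // >= kSubOps means no help session is open
    std::atomic<int> completed{0};

    void apply_sub_op(int i) {                 // placeholder for one independent sub-operation
        std::printf("sub-op %d done\n", i);
    }

    void help() {                              // run by an accessor that does NOT hold the lock
        int i;
        while ((i = next_sub_op.fetch_add(1)) < kSubOps) {
            apply_sub_op(i);
            completed.fetch_add(1);
        }
    }

    void first_accessor() {
        std::lock_guard<std::mutex> guard(first_lock);
        completed.store(0);
        next_sub_op.store(0);                  // initiate the help session
        help();                                // the lock holder also performs sub-operations
        while (completed.load() < kSubOps) {}  // wait until the first operation is complete
    }                                          // the lock is released only after completion

    int main() {
        std::thread owner(first_accessor);
        std::thread helper(help);              // second accessor helps without acquiring the lock
        owner.join();
        helper.join();
    }

Helping here is opportunistic: if the helper runs before a session is open, it simply returns, and the lock holder completes the remaining sub-operations itself.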
Abstract:
We teach a powerful approach that greatly simplifies the design of non-blocking mechanisms and data structures, in part by largely separating the issues of correctness and progress. At a high level, our methodology includes designing an “obstruction-free” implementation of the desired mechanism or data structure, which may then be combined with a contention management mechanism whose role is to facilitate the conditions under which progress of the obstruction-free implementation is assured. In general, the contention management mechanism is semantically separable from an obstruction-free concurrent shared/sharable object implementation to which it is (or may be) applied. In some cases, the contention management mechanism may actually be coded separately from the obstruction-free implementation. We elaborate herein on the notions of obstruction-freedom and contention management, and on various possibilities for combining the two. In addition, we include descriptions of some exemplary applications to particular concurrent software mechanisms and data structure implementations.
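As a rough illustration of that separation (and not of any particular implementation taught here), the following C++ sketch uses a simple CAS retry loop in place of an obstruction-free operation, with an exponential-backoff contention manager coded separately and consulted only on interference; BackoffManager and increment are illustrative names:

    #include <atomic>
    #include <chrono>
    #include <thread>

    struct BackoffManager {                    // contention management, coded separately
        std::chrono::microseconds delay{1};
        void on_contention() {                 // invoked only when interference is detected
            std::this_thread::sleep_for(delay);
            if (delay < std::chrono::microseconds{1024}) delay *= 2;
        }
    };

    std::atomic<int> shared_value{0};

    int increment(BackoffManager& cm) {        // stands in for the obstruction-free operation
        for (;;) {
            int seen = shared_value.load();
            if (shared_value.compare_exchange_weak(seen, seen + 1))
                return seen + 1;               // completes if run long enough in isolation
            cm.on_contention();                // otherwise defer to the contention manager
        }
    }

    int main() {
        BackoffManager cm;
        increment(cm);                         // single-threaded call: no contention, no backoff
    }

Because the policy lives entirely in BackoffManager, it can be swapped (for example, for a priority- or timestamp-based manager) without touching the operation itself, mirroring the semantic separability described above.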
Abstract:
Systems and methods for implementing constrained data-driven parallelism may provide programmers with mechanisms for controlling the execution order and/or interleaving of tasks spawned during execution. For example, a programmer may define a task group that includes a single task, and the single task may define a direct or indirect trigger that causes another task to be spawned (e.g., in response to a modification of data specified in the trigger). Tasks spawned by a given task may be added to the same task group as the given task. A deferred keyword may control whether a spawned task is to be executed in the current execution phase or its execution is to be deferred to a subsequent execution phase for the task group. All tasks executing in the current execution phase may need to complete before execution of tasks in the next phase can begin.
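A minimal sequential C++ sketch of the phase semantics described above; TaskGroup, spawn, and When::Deferred are illustrative names, and the sketch models only the ordering constraint, not parallel execution or data-driven triggers:

    #include <cstdio>
    #include <deque>
    #include <functional>

    class TaskGroup {
        std::deque<std::function<void()>> current, deferred;
    public:
        enum class When { Now, Deferred };
        void spawn(std::function<void()> t, When w = When::Now) {
            (w == When::Now ? current : deferred).push_back(std::move(t));
        }
        void run() {
            int phase = 0;
            while (!current.empty() || !deferred.empty()) {
                std::printf("-- phase %d --\n", phase++);
                while (!current.empty()) {         // every current-phase task completes...
                    auto t = std::move(current.front());
                    current.pop_front();
                    t();                           // ...including ones spawned during the phase
                }
                current.swap(deferred);            // ...before the next phase begins
            }
        }
    };

    int main() {
        TaskGroup g;
        g.spawn([&g] {                             // the single initial task of the group
            std::printf("root task\n");
            g.spawn([] { std::printf("same-phase child\n"); });
            g.spawn([] { std::printf("deferred child\n"); }, TaskGroup::When::Deferred);
        });
        g.run();
    }

In the sketch, the root task's first child runs in the same phase as the root task, while the deferred child runs only after the current phase has drained.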
Abstract:
Transactional reader-writer locks may leverage available hardware transactional memory (HTM) to simplify the procedures of the reader-writer lock algorithm and to eliminate a requirement for type-stable memory. An HTM-based reader-writer lock may include an ordered list of client-provided nodes, each of which represents a thread that holds (or desires to acquire) the lock, and a tail pointer. The locking and unlocking procedures invoked by readers and writers may access the tail pointer or particular ones of the nodes in the list using various combinations of transactions and non-transactional accesses to insert nodes into the list or to remove nodes from the list. A reader or writer that owns a node at the head of the list (or a reader whose node is preceded in the list only by other readers' nodes) may access a critical section of code or shared resource.
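The C++ fragment below sketches only the enqueue step of such a lock, assuming Intel RTM intrinsics (compile with -mrtm and run on RTM-capable hardware); Node, tail, and lock_enqueue are illustrative names, and the remaining procedures (head tracking, reader grouping, node removal) are omitted:

    #include <atomic>
    #include <immintrin.h>

    struct Node {                              // client-provided node, one per thread
        std::atomic<Node*> next{nullptr};
        bool is_writer{false};
        std::atomic<bool> may_enter{false};    // set once this node reaches the head
    };

    std::atomic<Node*> tail{nullptr};          // tail pointer of the ordered list

    Node* lock_enqueue(Node* me) {             // returns the predecessor node, if any
        if (_xbegin() == _XBEGIN_STARTED) {    // transactional insertion
            Node* pred = tail.load(std::memory_order_relaxed);
            tail.store(me, std::memory_order_relaxed);
            if (pred) pred->next.store(me, std::memory_order_relaxed);
            _xend();
            return pred;
        }
        Node* pred = tail.exchange(me);        // non-transactional fallback
        if (pred) pred->next.store(me);
        return pred;
    }

    int main() {
        Node a, b;
        lock_enqueue(&a);                      // a is at the head and may enter
        lock_enqueue(&b);                      // b is queued behind a
    }

A caller would then wait on its node (or, for a reader, also check that every preceding node belongs to a reader) before entering the critical section.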
Abstract:
Socket scheduling modes may prevent non-uniform memory access effects from negatively affecting the performance of synchronization mechanisms utilizing hardware transactional memory. Each mode may indicate whether a thread may execute a critical section on a particular socket. For example, under transactional lock elision, locks may include a mode indicating whether threads may acquire or elide the lock on a particular socket. Different modes may be used alternately to prevent threads from starving. A thread may only execute a critical section on a particular socket if allowed by the current mode. Otherwise, threads may block until allowed to execute the critical section, such as after the current mode changes. A profiling session may, for a running workload, iterate over all possible modes, measuring statistics pertaining to the execution of critical sections (e.g., the number of lock acquisitions and/or elisions), to determine the best-performing modes for the particular workload.
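A minimal C++ sketch of the mode check, assuming an explicitly supplied socket id and a fixed round-robin mode rotation (both illustrative; the profiling-driven mode selection and the actual lock elision are not modeled):

    #include <atomic>
    #include <mutex>
    #include <thread>

    constexpr int kNumSockets = 2;             // illustrative two-socket machine

    struct SocketModedLock {
        std::mutex lock;                       // the lock to be acquired or elided
        std::atomic<int> mode{0};              // socket currently allowed to run critical sections

        void enter(int socket) {
            while (mode.load() != socket)      // block until the mode admits this socket
                std::this_thread::yield();
            lock.lock();                       // a TLE runtime might elide this acquisition
        }
        void exit() { lock.unlock(); }
        void rotate() {                        // alternate modes so no socket starves
            mode.store((mode.load() + 1) % kNumSockets);
        }
    };

    int main() {
        SocketModedLock l;
        l.enter(/*socket=*/0);                 // allowed: the current mode is socket 0
        l.exit();
        l.rotate();                            // threads on socket 1 may now enter
    }

In a real deployment the modes would be chosen and alternated based on the profiling statistics mentioned above rather than a fixed rotation.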