Abstract:
A data object has a lock and a condition indicator associated with it. Based at least partly on detecting a first setting of the condition indicator, a reader stores, in an element of a readers structure, an indication that it has obtained read access to the data object, and reads the data object without acquiring the lock. A writer detects the first setting and replaces it with a second setting, indicating that the lock is to be acquired by readers before reading the data object. Prior to performing a write on the data object, the writer verifies that one or more elements of the readers structure have been cleared.
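A minimal sketch in C (C11 atomics and pthreads) of the scheme described above; all names (guarded_t, FAST_READ, reader_slots, and so on) are illustrative rather than taken from the patent, and the mutex must be initialized with pthread_mutex_init before use.

    #include <stdatomic.h>
    #include <pthread.h>

    #define MAX_READERS 64
    enum { FAST_READ, LOCKED_READ };  /* the two indicator settings */

    typedef struct {
        pthread_mutex_t lock;                       /* the associated lock */
        atomic_int      mode;                       /* the condition indicator */
        atomic_int      reader_slots[MAX_READERS];  /* the readers structure */
        int             value;                      /* the protected data object */
    } guarded_t;

    int read_object(guarded_t *g, int slot) {
        if (atomic_load(&g->mode) == FAST_READ) {
            atomic_store(&g->reader_slots[slot], 1);   /* announce the read */
            if (atomic_load(&g->mode) == FAST_READ) {  /* re-check after announcing */
                int v = g->value;                      /* read without the lock */
                atomic_store(&g->reader_slots[slot], 0);
                return v;
            }
            atomic_store(&g->reader_slots[slot], 0);   /* setting changed: fall back */
        }
        pthread_mutex_lock(&g->lock);                  /* second setting: lock first */
        int v = g->value;
        pthread_mutex_unlock(&g->lock);
        return v;
    }

    void write_object(guarded_t *g, int v) {
        pthread_mutex_lock(&g->lock);
        atomic_store(&g->mode, LOCKED_READ);      /* replace the first setting */
        for (int i = 0; i < MAX_READERS; i++)     /* verify readers structure */
            while (atomic_load(&g->reader_slots[i]))
                ;                                 /* wait for elements to clear */
        g->value = v;                             /* now safe to write */
        atomic_store(&g->mode, FAST_READ);
        pthread_mutex_unlock(&g->lock);
    }

The announce-then-re-check order in read_object matters: a writer that changes the setting before a reader's slot becomes visible is guaranteed to see that slot during its scan, or the reader is guaranteed to see the second setting and fall back to the lock.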
Abstract:
Hardware transactional memory (HTM) systems may guarantee that transactions commit without falling back to non-speculative code paths. A transaction that fails to progress may enter a power mode, giving the transaction priority when it conflicts with non-power-mode transactions. If, during execution of a power-mode transaction, another thread attempts, using a non-power-mode transaction, to access a shared resource being accessed by the power-mode transaction, it may be determined whether any actual data conflict occurs between the two transactions. If no data conflict exists, both transactions may continue to completion. If, however, a data conflict does exist, the power-mode transaction may deny the other transaction access to the shared resource. HTM systems may, in some embodiments, ensure that only one power-mode transaction exists at a time. In other embodiments, multiple, concurrent, power-mode transactions may be supported while ensuring that they access disjoint data sets.
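A skeletal sketch, in C, of the retry-then-escalate control flow; htm_begin, htm_begin_power, and htm_commit are hypothetical stand-ins for hardware primitives (no mainstream ISA currently exposes a power-mode bit), and MAX_TRIES is an arbitrary illustrative bound.

    #include <stdbool.h>

    extern bool htm_begin(void);        /* hypothetical: start a normal transaction */
    extern bool htm_begin_power(void);  /* hypothetical: start a power-mode transaction */
    extern void htm_commit(void);

    #define MAX_TRIES 8

    void update_shared(void (*critical)(void)) {
        for (int tries = 0; tries < MAX_TRIES; tries++) {
            if (htm_begin()) {          /* optimistic, non-power-mode attempt */
                critical();
                htm_commit();
                return;
            }
        }
        /* Failing to progress, escalate to power mode. Per the abstract, the
         * hardware lets conflict-free transactions run alongside this one but
         * resolves any true data conflict in the power-mode transaction's
         * favor; a simple variant admits one power-mode transaction at a time. */
        while (!htm_begin_power())
            ;
        critical();
        htm_commit();
    }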
Abstract:
A computer system may recognize a busy-wait loop in program instructions at compile time and/or may recognize busy-wait looping behavior during execution of program instructions. The system may recognize that an exit condition for a busy-wait loop is specified by a conditional branch type instruction in the program instructions. In response to identifying the loop and the conditional branch type instruction that specifies its exit condition, the system may influence or override a prediction made by a dynamic branch predictor, resulting in a prediction that the exit condition will be met and that the loop will be exited regardless of any observed branch behavior for the conditional branch type instruction. The looping instructions may implement waiting for an inter-thread communication event to occur or for a lock to become available. When the exit condition is met, the loop may be exited without incurring a misprediction delay.
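The pattern at issue is easy to see in code. The sketch below shows a representative busy-wait loop in C; the __builtin_expect hint is only a software analogue of the described mechanism, marking the exit condition as the expected outcome rather than actually overriding a dynamic branch predictor.

    #include <stdatomic.h>

    atomic_int flag;  /* set by another thread to signal the awaited event */

    void busy_wait(void) {
        /* The conditional branch on 'flag' specifies the exit condition.
         * The described system would force this branch to be predicted as
         * exiting the loop, regardless of its observed (looping) history,
         * so that no misprediction delay is paid when 'flag' is finally set. */
        while (__builtin_expect(atomic_load(&flag) == 0, 0))
            ;  /* spin: waiting for an inter-thread communication event */
    }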
Abstract:
NUMA-aware reader-writer locks may leverage lock cohorting techniques that introduce a synthetic level into the lock hierarchy (e.g., one whose nodes do not correspond to the system topology). The synthetic level may include a global reader lock and a global writer lock. A writer thread may acquire a node-level writer lock, then the global writer lock, and then the top-level lock, after which it may access a critical section protected by the lock. The writer may release the lock (if an upper bound on consecutive writers has been met), or may pass the lock to another writer (on the same node or a different node, according to a fairness policy). A reader may acquire the global reader lock (whether or not node-level reader locks are present), and then the top-level lock. However, readers may only hold these locks long enough to increment reader counts associated with them.
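A skeletal sketch in C of the acquisition order described above, using plain pthread mutexes for every level; the structure and names (numa_rw_t, node_wlock, and so on) are illustrative, and the writer-to-writer hand-off and fairness policy are reduced to a comment.

    #include <stdatomic.h>
    #include <pthread.h>

    #define NODES 4

    typedef struct {
        pthread_mutex_t node_wlock[NODES];  /* per-node writer locks */
        pthread_mutex_t global_wlock;       /* synthetic level: global writer lock */
        pthread_mutex_t global_rlock;       /* synthetic level: global reader lock */
        pthread_mutex_t top;                /* top-level lock */
        atomic_int      readers;            /* reader count for the top level */
    } numa_rw_t;

    void writer_lock(numa_rw_t *l, int node) {
        pthread_mutex_lock(&l->node_wlock[node]);  /* 1: node-level writer lock */
        pthread_mutex_lock(&l->global_wlock);      /* 2: global writer lock */
        pthread_mutex_lock(&l->top);               /* 3: top-level lock */
        while (atomic_load(&l->readers) != 0)
            ;                                      /* drain in-flight readers */
    }

    void writer_unlock(numa_rw_t *l, int node) {
        /* A cohort implementation could instead pass ownership to a waiting
         * writer on the same node, up to the bound on consecutive writers,
         * before releasing toward another node per the fairness policy. */
        pthread_mutex_unlock(&l->top);
        pthread_mutex_unlock(&l->global_wlock);
        pthread_mutex_unlock(&l->node_wlock[node]);
    }

    void reader_lock(numa_rw_t *l) {
        pthread_mutex_lock(&l->global_rlock);  /* held only long enough to... */
        pthread_mutex_lock(&l->top);
        atomic_fetch_add(&l->readers, 1);      /* ...increment the reader count */
        pthread_mutex_unlock(&l->top);
        pthread_mutex_unlock(&l->global_rlock);
    }

    void reader_unlock(numa_rw_t *l) {
        atomic_fetch_sub(&l->readers, 1);
    }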
Abstract:
Generic Concurrency Restriction (GCR) may divide a set of threads waiting to acquire a lock into two sets: an active set currently able to contend for the lock, and a passive set waiting for an opportunity to join the active set and contend for the lock. The number of threads in the active set may be limited to a predefined maximum or even a single thread. Generic Concurrency Restriction may be implemented as a wrapper around an existing lock implementation. Generic Concurrency Restriction may, in some embodiments, be unfair (e.g., to some threads) over the short term, but may improve the overall throughput of the underlying multithreaded application via passivation of a portion of the waiting threads.
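A minimal sketch of the wrapper idea in C, assuming a pthread mutex as the pre-existing lock; the admission logic is deliberately simplified, with a bare counter and sched_yield standing in for real active/passive bookkeeping.

    #include <stdatomic.h>
    #include <pthread.h>
    #include <sched.h>

    #define MAX_ACTIVE 1  /* bound on the active set; here a single thread */

    typedef struct {
        pthread_mutex_t inner;   /* the wrapped, unmodified lock implementation */
        atomic_int      active;  /* threads currently allowed to contend */
    } gcr_lock_t;

    void gcr_lock(gcr_lock_t *l) {
        /* Passive set: wait for a slot in the active set before contending. */
        int n = atomic_load(&l->active);
        while (n >= MAX_ACTIVE ||
               !atomic_compare_exchange_weak(&l->active, &n, n + 1)) {
            sched_yield();              /* passivated: do not touch the lock */
            n = atomic_load(&l->active);
        }
        pthread_mutex_lock(&l->inner);  /* active set: contend on the real lock */
    }

    void gcr_unlock(gcr_lock_t *l) {
        pthread_mutex_unlock(&l->inner);
        atomic_fetch_sub(&l->active, 1);  /* open a slot for a passive thread */
    }

Because the wrapper only gates entry to the underlying lock's acquire path, it composes with any existing lock implementation, at the cost of the short-term unfairness noted above.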
Abstract:
Transactional reader-writer locks may leverage available hardware transactional memory (HTM) to simplify the procedures of the reader-writer lock algorithm and to eliminate a requirement for type-stable memory. An HTM-based reader-writer lock may include an ordered list of client-provided nodes, each of which represents a thread that holds (or desires to acquire) the lock, and a tail pointer. The locking and unlocking procedures invoked by readers and writers may access the tail pointer or particular ones of the nodes in the list using various combinations of transactions and non-transactional accesses to insert nodes into the list or to remove nodes from the list. A reader or writer that owns a node at the head of the list (or a reader whose node is preceded in the list only by other readers' nodes) may access a critical section of code or shared resource.
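A sketch in C of the list structure and a transactional enqueue; TXN_BEGIN and TXN_END are hypothetical HTM wrappers (they might map to Intel RTM's _xbegin/_xend plus a fallback path), and the ownership hand-off on dequeue is omitted.

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct node {
        struct node *next;
        bool         is_writer;
        bool         granted;  /* set once this node reaches the head, or is a
                                  reader preceded only by readers' nodes */
    } node_t;

    typedef struct {
        node_t *tail;  /* tail pointer into the ordered list of client nodes */
    } htm_rw_lock_t;

    extern bool TXN_BEGIN(void);  /* hypothetical HTM wrappers */
    extern void TXN_END(void);

    /* Each thread supplies its own node, which is why the algorithm needs
     * no type-stable memory: nodes never outlive their owning clients. */
    void enqueue(htm_rw_lock_t *l, node_t *me, bool is_writer) {
        me->next = NULL;
        me->is_writer = is_writer;
        me->granted = false;
        while (!TXN_BEGIN())
            ;                    /* a real implementation would bound retries */
        node_t *pred = l->tail;  /* read and swing the tail transactionally */
        l->tail = me;
        if (pred == NULL)
            me->granted = true;  /* empty list: immediate ownership */
        else
            pred->next = me;     /* link in; predecessor passes ownership later */
        TXN_END();
    }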
Abstract:
Particular techniques for improving the scalability of concurrent programs (e.g., lock-based applications) may be effective in some environments and for some workloads, but not others. The systems described herein may automatically choose appropriate ones of these techniques to apply when executing lock-based applications at runtime, based on observations of the application in the current environment and with the current workload. In one example, two techniques for improving lock scalability (e.g., transactional lock elision using hardware transactional memory, and optimistic software techniques) may be integrated together. A lightweight runtime library built for this purpose may adapt its approach to managing concurrency by dynamically selecting one or more of these techniques (at different times) during execution of a given application. In this Adaptive Lock Elision approach, the techniques may be selected (based on pluggable policies) at runtime to achieve good performance on different platforms and for different workloads.
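A sketch in C of what such a dispatch layer might look like; the mode names, the policy hook, and the three execution paths are illustrative stand-ins, not the library's actual interface.

    typedef enum { USE_HTM_ELISION, USE_OPTIMISTIC_SW, USE_PLAIN_LOCK } ale_mode_t;

    typedef struct {
        ale_mode_t (*choose)(void *stats);  /* pluggable policy: picks a technique */
        void       *stats;                  /* runtime observations (aborts, waits, ...) */
    } ale_policy_t;

    void ale_critical_section(ale_policy_t *p,
                              void (*htm_path)(void),        /* lock elision via HTM */
                              void (*optimistic_path)(void), /* optimistic software path */
                              void (*locked_path)(void)) {   /* plain lock acquisition */
        switch (p->choose(p->stats)) {  /* re-evaluated at runtime, so the choice
                                           can change with workload and platform */
        case USE_HTM_ELISION:   htm_path();        break;
        case USE_OPTIMISTIC_SW: optimistic_path(); break;
        default:                locked_path();     break;
        }
    }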
Abstract:
A concurrency-restricting lock may divide a set of threads waiting to acquire the lock into an active circulating set (ACS) that contends for the lock, and a passive set (PS) that awaits an opportunity to contend for the lock. The lock, which may include multiple constituent lock types, lists, or queues, may be unfair over the short term, but improve throughput of the underlying multithreaded application. Culling and long-term fairness policies may be applied to the lock to move excess threads from the ACS to the PS or promote threads from the PS to the ACS. These policies may constrain the size or distribution of threads in the ACS (which may be NUMA-aware). A waiting policy may avoid aggressive promotion from the PS to the ACS, and a short-term fairness policy may move a thread from the tail of a list or queue to its head.
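A sketch in C of the two policy moves, with the ACS and PS reduced to unsynchronized singly linked lists so that only the culling and promotion logic shows; all names are illustrative.

    #include <stddef.h>

    typedef struct thr { struct thr *next; int node; } thr_t;

    typedef struct {
        thr_t *acs;        /* active circulating set: may contend for the lock */
        thr_t *ps;         /* passive set: parked, awaiting promotion */
        int    acs_size;
        int    acs_limit;  /* culling policy: bound on the size of the ACS */
    } cr_lock_t;

    /* Culling: move excess threads from the ACS to the passive set. */
    void cull(cr_lock_t *l) {
        while (l->acs_size > l->acs_limit && l->acs) {
            thr_t *t = l->acs;
            l->acs = t->next;
            t->next = l->ps;  /* excess thread becomes passive */
            l->ps = t;
            l->acs_size--;
        }
    }

    /* Long-term fairness: promote a passive thread into the ACS. A NUMA-aware
     * variant would prefer threads whose 'node' matches the ACS's current node,
     * and a short-term fairness policy might instead move a thread from the
     * tail of a list to its head. */
    void promote(cr_lock_t *l) {
        if (l->ps && l->acs_size < l->acs_limit) {
            thr_t *t = l->ps;
            l->ps = t->next;
            t->next = l->acs;
            l->acs = t;
            l->acs_size++;
        }
    }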
Abstract:
The systems and methods described herein may be used to implement scalable statistics counters suitable for use in systems that employ a NUMA style memory architecture. The counters may be implemented as data structures that include a count value portion and a node identifier portion. The counters may be accessible within transactions. The node identifier portion may identify a node on which a thread that most recently incremented the counter was executing or one on which a thread that has requested priority to increment the shared counter was executing. Threads executing on identified nodes may have higher priority to increment the counter than other threads. Threads executing on other nodes may delay their attempts to increment the counter, thus encouraging consecutive updates from threads on a single node. Impatient threads may attempt to update the node identifier portion or may update an anti-starvation variable to indicate a request for priority.
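A sketch in C of the split counter word; the field widths, the patience bound, and the use of sched_yield as the delay are all illustrative choices. Packing the count value and node identifier into a single word keeps the counter updatable with one compare-and-swap, and hence usable inside transactions.

    #include <stdatomic.h>
    #include <stdint.h>
    #include <sched.h>

    #define NODE_BITS 8  /* high bits: node id portion; low bits: count value */
    #define NODE(v)   ((int)((v) >> (64 - NODE_BITS)))
    #define COUNT(v)  ((v) & ((UINT64_C(1) << (64 - NODE_BITS)) - 1))
    #define PACK(n,c) (((uint64_t)(n) << (64 - NODE_BITS)) | (c))

    _Atomic uint64_t counter;

    void increment(int my_node) {
        int patience = 100;  /* illustrative bound before growing impatient */
        for (;;) {
            uint64_t v = atomic_load(&counter);
            if (NODE(v) != my_node && patience-- > 0) {
                sched_yield();  /* defer to the identified node, encouraging
                                   consecutive updates from a single node */
                continue;
            }
            /* Either we are on the identified node, or we ran out of patience
             * and try to install our own node id with the incremented count. */
            uint64_t desired = PACK(my_node, COUNT(v) + 1);
            if (atomic_compare_exchange_weak(&counter, &v, desired))
                return;
        }
    }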