Abstract:
For each of multiple processes executing in parallel, as long as corresponding version information associated with a respective set of one or more shared variables used for computational purposes has not changed during execution of a respective transaction, results of the respective transaction can be globally committed to memory without causing data corruption. If version information associated with one or more respective shared variables (used to produce the transaction results) happens to change while the results are being generated, then the respective process can identify that another process modified the one or more respective shared variables during execution and that its transaction results should not be committed to memory. In this latter case, the transaction is retried until its results can be committed without causing data corruption.
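For illustration only, a minimal Java sketch of such version validation might look like the following; the class and method names are invented here (not taken from the abstract), and a single shared variable stands in for the respective set of shared variables:

    import java.util.concurrent.atomic.AtomicInteger;

    class VersionedCell {
        private final AtomicInteger version = new AtomicInteger(); // even = stable
        private volatile long value;                               // the shared variable

        // One transaction: snapshot, compute, then commit only if the version
        // has not changed; otherwise another process intervened, so repeat.
        long transactionalIncrement() {
            while (true) {
                int v1 = version.get();
                if ((v1 & 1) != 0) { Thread.onSpinWait(); continue; } // commit in flight
                long result = value + 1;                 // compute from the snapshot
                if (version.compareAndSet(v1, v1 + 1)) { // validate and lock in one step
                    value = result;                      // globally commit the result
                    version.set(v1 + 2);                 // stable again, version advanced
                    return result;
                }
                // Version changed mid-transaction: retry rather than corrupt data.
            }
        }
    }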
Abstract:
A computer system includes multiple processing threads that execute in parallel. The multiple processing threads have access to a global environment including different types of metadata enabling the processing threads to carry out simultaneous execution depending on a currently selected type of lock mode. A mode controller monitoring the processing threads initiates switching from one type of lock mode to another depending on current operating conditions such as an amount of contention amongst the multiple processing threads attempting to modify the shared data. The mode controller can switch from one lock mode to another regardless of whether any of the multiple processes are in the midst of executing a respective transaction. The most efficient lock mode can thus be selected to carry out the parallel transactions. In certain cases, switching of lock modes causes one or more of the processing threads to abort and retry a respective transaction according to the new mode.
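A hedged Java sketch of what such a mode controller might look like; all names and the failure threshold are invented for illustration, and the optimistic path omits the read-set validation a real transactional mode would also perform:

    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.concurrent.locks.ReentrantLock;
    import java.util.function.Consumer;
    import java.util.function.Supplier;

    class AdaptiveModeController {
        enum Mode { OPTIMISTIC, MUTEX }
        private volatile Mode mode = Mode.OPTIMISTIC;
        private final AtomicInteger epoch = new AtomicInteger();   // bumped on each switch
        private final AtomicInteger failures = new AtomicInteger();
        private final ReentrantLock mutex = new ReentrantLock();

        // The monitoring side: switch modes when contention is high, without
        // waiting for in-flight transactions to finish.
        void recordFailure() {
            if (failures.incrementAndGet() > 100 && mode == Mode.OPTIMISTIC) {
                mode = Mode.MUTEX;           // heavy contention: use mutual exclusion
                epoch.incrementAndGet();     // forces in-flight optimistic txns to abort
                failures.set(0);
            }
        }

        // compute produces a tentative result; commit publishes it. Separating
        // the two lets an aborted optimistic attempt be discarded harmlessly.
        <T> T run(Supplier<T> compute, Consumer<T> commit) {
            while (true) {
                int e = epoch.get();
                if (mode == Mode.MUTEX) {
                    mutex.lock();
                    try { T r = compute.get(); commit.accept(r); return r; }
                    finally { mutex.unlock(); }
                }
                T r = compute.get();         // optimistic attempt
                if (epoch.get() == e) {      // no mode switch occurred meanwhile
                    commit.accept(r);
                    return r;
                }
                recordFailure();             // aborted by a switch: retry under new mode
            }
        }
    }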
Abstract:
Cache logic associated with a respective one of multiple processing threads executing in parallel updates corresponding data fields of a cache to uniquely mark its contents. The marked contents represent a respective read set for a transaction. For example, at the outset of executing a transaction, a respective processing thread chooses a data value with which to mark contents of the cache used for producing a transaction outcome for the processing thread. Upon each read of shared data from main memory, the cache stores a copy of the data and marks it as being used during execution of the processing thread. If uniquely marked contents of a respective cache line happen to be displaced (e.g., overwritten) during execution of a processing thread, then the transaction is aborted (rather than being committed to main memory) because there is a possibility that another transaction overwrote a shared data value used during the respective transaction.
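The abstract describes a hardware cache mechanism; the following Java sketch merely models the idea in software, with an invented direct-mapped cache of 8 lines and invented names throughout:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class MarkedCache {
        static final int LINES = 8;                    // tiny direct-mapped cache
        static final class Line { int key; long value; long mark; }
        private final Line[] lines = new Line[LINES];
        private final Map<Integer, Long> mainMemory = new ConcurrentHashMap<>();

        // On each read, copy the datum into the cache and tag the line with
        // the mark the transaction chose at its outset.
        long read(int key, long mark) {
            Line ln = new Line();
            ln.key = key;
            ln.value = mainMemory.getOrDefault(key, 0L);
            ln.mark = mark;
            lines[Math.floorMod(key, LINES)] = ln;     // may displace another line
            return ln.value;
        }

        // At commit time, every line in the read set must still carry this
        // transaction's mark; a displaced line signals a possible conflict,
        // so the transaction must abort instead of committing.
        boolean validateReadSet(int[] keysRead, long mark) {
            for (int key : keysRead) {
                Line ln = lines[Math.floorMod(key, LINES)];
                if (ln == null || ln.key != key || ln.mark != mark) return false;
            }
            return true;
        }
    }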
Abstract:
A technique for accessing a shared resource of a computerized system involves running a first portion of a first thread within the computerized system, the first portion (i) requesting a lock on the shared resource and (ii) directing the computerized system to make operations of a second thread visible in a correct order. The technique further involves making operations of the second thread visible in the correct order in response to the first portion of the first thread running within the computerized system, and running a second portion of the first thread within the computerized system to determine whether the first thread has obtained the lock on the shared resource. Such a technique alleviates the need for using a MEMBAR instruction in the second thread.
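A rough Java sketch of the control structure, with invented names. Note that VarHandle.fullFence() orders only the calling thread, so it is used here as a stand-in for the stronger system-wide serialization the technique requires (e.g., Linux's membarrier(2) system call or cross-processor interrupts):

    import java.lang.invoke.VarHandle;

    class AsymmetricHandshake {
        private volatile boolean lockRequested;  // written by the locking thread
        private boolean fastPathActive;          // plain field: no MEMBAR on the fast path

        // Second thread's fast path: plain store and load, no barrier.
        boolean fastPathEnter() {
            fastPathActive = true;
            if (lockRequested) {                 // locking thread got there first
                fastPathActive = false;
                return false;
            }
            return true;
        }
        void fastPathExit() { fastPathActive = false; }

        // First portion of the first thread: request the lock and force the
        // other thread's operations to become visible in the correct order.
        void requestLock() {
            lockRequested = true;
            VarHandle.fullFence();               // stand-in for system-wide serialization
        }

        // Second portion of the first thread: determine whether the lock was
        // obtained by inspecting the now-visible state of the second thread.
        boolean lockObtained() {
            return !fastPathActive;
        }
    }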
Abstract:
An operating system call control subsystem is disclosed for use in a computer that includes a processor for processing a program, the program including instructions of an operating system call instruction type, each identifying one of a plurality of types of operating system calls, each type of operating system call being associable with an operating system call type identifier value within a predetermined range of values. The operating system call control subsystem comprises a crossover table, an operating system call instruction type address resolution module, and an operating system call instruction type processing module. The crossover table has a number of entries corresponding to a predetermined fraction of the predetermined range, each entry in the crossover table having an instruction for enabling the processor to save a value corresponding to an offset of the entry into the crossover table. The operating system call instruction type address resolution module provides the instructions of the operating system call instruction type with respective target addresses that include an operating system call set identifier in a set of operating system call set identifiers (the number of operating system call set identifiers multiplied by the number of crossover table entries corresponding to the predetermined range), and an offset value corresponding to an offset of an entry in the crossover table. The operating system call instruction type processing module, in response to the processor processing an instruction of the operating system call instruction type, (a) saves the operating system call set identifier from the target address, (b) selects one of the entries in the crossover table using the offset value of the target address, (c) processes the instruction from the selected entry of the crossover table to save the value corresponding to the offset of the selected entry in the crossover table, and (d) generates the operating system call type identifier value in connection with the saved operating system call set identifier and the saved value corresponding to the offset of the selected entry in the crossover table.
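A toy Java model of the encoding, assuming (for illustration only) a crossover table of 256 entries and 4 operating system call sets, so that the full identifier range is 1024:

    class CrossoverTable {
        static final int TABLE_SIZE = 256;  // the "predetermined fraction" of the range
        static final int NUM_SETS = 4;      // NUM_SETS * TABLE_SIZE == the full range

        // Each entry holds one "instruction" whose only job is to save a value
        // corresponding to that entry's own offset into the table.
        interface Entry { int saveOffsetValue(); }
        private final Entry[] table = new Entry[TABLE_SIZE];

        CrossoverTable() {
            for (int i = 0; i < TABLE_SIZE; i++) {
                final int offset = i;
                table[i] = () -> offset;
            }
        }

        // Address resolution: a call id in [0, NUM_SETS * TABLE_SIZE) is viewed
        // as a target address combining a set identifier and a table offset.
        static int setIdOf(int target)  { return target / TABLE_SIZE; }
        static int offsetOf(int target) { return target % TABLE_SIZE; }

        // Processing: (a) save the set id, (b) select the entry by offset,
        // (c) run the entry to save its offset value, (d) regenerate the
        // operating system call type identifier from the two saved values.
        int dispatch(int target) {
            int savedSetId = setIdOf(target);                    // (a)
            Entry entry = table[offsetOf(target)];               // (b)
            int savedOffset = entry.saveOffsetValue();           // (c)
            return savedSetId * TABLE_SIZE + savedOffset;        // (d)
        }
    }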
Abstract:
NUMA-aware reader-writer locks may leverage lock cohorting techniques to band together writer requests from a single NUMA node. These locks may relax the order in which they schedule the execution of critical sections of code by reader threads and writer threads, allowing lock ownership to remain resident on a single NUMA node for long periods, while also taking advantage of parallelism between reader threads. Threads may contend on node-level structures to get permission to acquire a globally shared reader-writer lock. Writer threads may follow a lock cohorting strategy of passing ownership of the lock in write mode from one thread to a cohort writer thread without releasing the shared lock, while reader threads from multiple NUMA nodes may simultaneously acquire the shared lock in read mode. The reader-writer lock may follow a writer-preference policy, a reader-preference policy, or a hybrid policy.
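A simplified Java sketch of one way such a lock could be structured (invented names; no reader/writer preference policy is modeled, and the global reader-writer state is a single atomic counter so that it can be released by a thread other than the one that acquired it):

    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.concurrent.locks.ReentrantLock;

    class NumaReaderWriterLock {
        // Global, thread-oblivious state: -1 means write-locked,
        // otherwise the value is the count of active readers.
        private final AtomicInteger state = new AtomicInteger(0);
        private final ReentrantLock[] nodeLock;    // writers contend per node
        private final AtomicInteger[] nodeWriters; // cohort detection
        private final boolean[] nodeOwnsGlobal;    // guarded by nodeLock[i]

        NumaReaderWriterLock(int nodes) {
            nodeLock = new ReentrantLock[nodes];
            nodeWriters = new AtomicInteger[nodes];
            nodeOwnsGlobal = new boolean[nodes];
            for (int i = 0; i < nodes; i++) {
                nodeLock[i] = new ReentrantLock();
                nodeWriters[i] = new AtomicInteger();
            }
        }

        // Readers from any node may hold the global lock simultaneously.
        void readLock() {
            while (true) {
                int s = state.get();
                if (s >= 0 && state.compareAndSet(s, s + 1)) return;
                Thread.onSpinWait();
            }
        }
        void readUnlock() { state.decrementAndGet(); }

        // Writers band together: contend on the node-level lock first, and
        // take the global lock only if a cohort writer has not already left
        // it held on behalf of this node.
        void writeLock(int node) {
            nodeWriters[node].incrementAndGet();
            nodeLock[node].lock();
            if (!nodeOwnsGlobal[node]) {
                while (!state.compareAndSet(0, -1)) Thread.onSpinWait();
                nodeOwnsGlobal[node] = true;
            }
        }

        void writeUnlock(int node) {
            nodeWriters[node].decrementAndGet();
            if (nodeWriters[node].get() == 0) {    // no cohort writer waiting
                nodeOwnsGlobal[node] = false;
                state.set(0);                      // thread-oblivious release
            }
            nodeLock[node].unlock();               // else: hand off within the node
        }
    }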
Abstract:
Transactional Lock Elision (TLE) may allow multiple threads to concurrently execute critical sections as speculative transactions. Transactions may abort for various reasons. To avoid starvation, transactions may revert to execution using mutual exclusion when transactional execution fails. Because threads may revert to mutual exclusion in response to the mutual exclusion of other threads, a positive feedback loop may form in times of high contention, causing a “lemming effect”. To regain the benefits of concurrent transactional execution, the system may allow one or more threads awaiting a given lock to be released from the wait queue and instead attempt transactional execution. A gang release may allow a subset of waiting threads to be released simultaneously. The subset may be chosen dependent on the number of waiting threads, historical abort relationships between threads, analysis of the transactions of each thread, sensitivity of each thread to abort, and/or other thread-local or global criteria.
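A Java sketch of the gang-release idea, with invented names. A tryLock() on the fallback lock stands in for a hardware-transactional attempt, and the gang size of half the waiters is arbitrary, whereas the abstract describes choosing the subset from waiter counts, abort history, and other criteria:

    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.locks.LockSupport;
    import java.util.concurrent.locks.ReentrantLock;

    class GangReleaseTle {
        static final int RETRIES = 3;
        private final ReentrantLock fallback = new ReentrantLock();
        private final ConcurrentLinkedQueue<Thread> waiters = new ConcurrentLinkedQueue<>();

        // Stand-in for one speculative attempt: real TLE would begin a hardware
        // transaction here; tryLock() mimics "aborts under contention".
        private boolean trySpeculative(Runnable body) {
            if (!fallback.tryLock()) return false;
            try { body.run(); return true; } finally { fallback.unlock(); }
        }

        void runCriticalSection(Runnable body) {
            while (true) {
                for (int i = 0; i < RETRIES; i++) {
                    if (trySpeculative(body)) {
                        // Success: wake a gang of waiters so they retry
                        // speculatively instead of serializing on the lock.
                        gangRelease(Math.max(1, waiters.size() / 2));
                        return;
                    }
                }
                Thread me = Thread.currentThread();    // the "lemming" path
                waiters.add(me);
                while (waiters.contains(me)) {
                    LockSupport.parkNanos(1_000_000L); // timed: keeps sketch livelock-free
                }
                // Gang-released: loop around and attempt transactions again.
            }
        }

        // Release up to k waiting threads simultaneously.
        private void gangRelease(int k) {
            for (int i = 0; i < k; i++) {
                Thread t = waiters.poll();
                if (t == null) return;
                LockSupport.unpark(t);
            }
        }
    }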
Abstract:
The systems and methods described herein may enable a processor core to run at higher speeds than other processor cores in the same package. A thread executing on one processor core may begin waiting for another thread to complete a particular action (e.g., to release a lock). While waiting for that action to complete, the thread/core may enter an inactive state. A data structure may store information indicating which threads are waiting on which other threads. In response to determining that a quorum of threads/cores are in an inactive state, one of the threads/cores may enter a turbo mode in which it executes at a higher speed than the baseline speed for the cores. A thread holding a lock and executing in turbo mode may perform work delegated by waiting threads at the higher speed. A thread may exit the inactive state when the waited-for action is completed.
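An illustrative Java model with invented names; the quorum threshold is arbitrary, and entering turbo mode is represented only by a comment, since raising a core's clock speed is a hardware/OS facility outside the reach of portable code:

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.locks.LockSupport;

    class TurboDelegation {
        static final int QUORUM = 3;   // arbitrary threshold for this sketch
        private final Set<Thread> inactive = ConcurrentHashMap.newKeySet();
        private final ConcurrentLinkedQueue<Runnable> delegated =
                new ConcurrentLinkedQueue<>();

        // A waiting thread delegates its work, records itself as inactive,
        // and idles until the delegated work has been taken up.
        void waitAndDelegate(Runnable work) {
            delegated.add(work);
            inactive.add(Thread.currentThread());
            while (delegated.contains(work)) {
                LockSupport.parkNanos(100_000L);      // inactive (low-power) wait
            }
            inactive.remove(Thread.currentThread());  // waited-for action completed
        }

        // The lock holder checks the shared structure; with a quorum of
        // threads/cores inactive it can perform the delegated work on their
        // behalf at the higher speed.
        void holderDrain() {
            if (inactive.size() >= QUORUM) {
                // (on real hardware: request a higher core frequency here)
                Runnable work;
                while ((work = delegated.poll()) != null) work.run();
            }
        }
    }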
Abstract:
The system and methods described herein may be used to implement NUMA-aware locks that employ lock cohorting. These lock cohorting techniques may reduce the rate of lock migration by relaxing the order in which the lock schedules the execution of critical code sections by various threads, allowing lock ownership to remain resident on a single NUMA node longer than under strict FIFO ordering, thus reducing coherence traffic and improving aggregate performance. A NUMA-aware cohort lock may include a global shared lock that is thread-oblivious, and multiple node-level locks that provide cohort detection. The lock may be constructed from non-NUMA-aware components (e.g., spin-locks or queue locks) that are modified to provide thread-obliviousness and/or cohort detection. Lock ownership may be passed from one thread that holds the lock to another thread executing on the same NUMA node without releasing the global shared lock.
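A compact Java sketch of a cohort lock assembled from such components (invented names: a test-and-set spin-lock serves as the thread-oblivious global lock, a ReentrantLock plus a waiter counter provides node-level cohort detection, and MAX_HANDOFFS bounds consecutive local hand-offs so other nodes are not starved):

    import java.util.concurrent.atomic.AtomicBoolean;
    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.concurrent.locks.ReentrantLock;

    class CohortLock {
        static final int MAX_HANDOFFS = 64;        // bound local passing for fairness
        private final AtomicBoolean global = new AtomicBoolean(false); // thread-oblivious
        private final ReentrantLock[] local;       // per-node, non-NUMA-aware component
        private final AtomicInteger[] waiting;     // cohort detection
        private final int[] handoffs;              // guarded by local[i]
        private final boolean[] nodeOwnsGlobal;    // guarded by local[i]

        CohortLock(int nodes) {
            local = new ReentrantLock[nodes];
            waiting = new AtomicInteger[nodes];
            handoffs = new int[nodes];
            nodeOwnsGlobal = new boolean[nodes];
            for (int i = 0; i < nodes; i++) {
                local[i] = new ReentrantLock();
                waiting[i] = new AtomicInteger();
            }
        }

        void lock(int node) {
            waiting[node].incrementAndGet();
            local[node].lock();                    // contend only with the home node
            waiting[node].decrementAndGet();
            if (!nodeOwnsGlobal[node]) {           // not handed over by a cohort
                while (!global.compareAndSet(false, true)) Thread.onSpinWait();
                nodeOwnsGlobal[node] = true;
                handoffs[node] = 0;
            }
        }

        void unlock(int node) {
            if (waiting[node].get() > 0 && handoffs[node]++ < MAX_HANDOFFS) {
                local[node].unlock();              // pass locally; global stays held
            } else {
                nodeOwnsGlobal[node] = false;
                global.set(false);                 // released by whichever thread holds it
                local[node].unlock();
            }
        }
    }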
Abstract:
A method, system, and medium are disclosed for facilitating communication between multiple concurrent threads of execution using a multi-lane concurrent bag. The bag comprises a plurality of independently-accessible concurrent intermediaries (lanes) that are each configured to store data elements. The bag provides an insert function executable to insert a given data element into the bag by selecting one of the intermediaries and inserting the data element into the selected intermediary. The bag also provides a consume function executable to consume a data element from the bag by choosing one of the intermediaries and consuming (removing and returning) a data element stored in the chosen intermediary. The bag guarantees that execution of the consume function consumes a data element if the bag is non-empty and permits multiple threads to execute the insert or consume functions concurrently.
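One plausible Java rendering of such a bag, with invented names. Note that a single sweep in consume() does not by itself meet the abstract's non-emptiness guarantee under arbitrary concurrency; that stronger guarantee would need additional machinery (e.g., rescanning or per-lane status bits):

    import java.util.concurrent.ConcurrentLinkedDeque;
    import java.util.concurrent.ThreadLocalRandom;

    class MultiLaneBag<E> {
        private final ConcurrentLinkedDeque<E>[] lanes;

        @SuppressWarnings("unchecked")
        MultiLaneBag(int laneCount) {
            lanes = new ConcurrentLinkedDeque[laneCount];
            for (int i = 0; i < laneCount; i++) {
                lanes[i] = new ConcurrentLinkedDeque<>();
            }
        }

        // Insert: select an intermediary (here, by thread id, so a given
        // thread keeps hitting the same lightly contended lane) and add to it.
        public void insert(E element) {
            int lane = (int) (Thread.currentThread().getId() % lanes.length);
            lanes[lane].addLast(element);
        }

        // Consume: choose a starting lane at random and sweep all lanes once,
        // removing and returning the first element found.
        public E consume() {
            int start = ThreadLocalRandom.current().nextInt(lanes.length);
            for (int i = 0; i < lanes.length; i++) {
                E e = lanes[(start + i) % lanes.length].pollFirst();
                if (e != null) return e;
            }
            return null;   // every lane was observed empty during the sweep
        }
    }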