Systems and Methods for Performing Concurrency Restriction and Throttling over Contended Locks

    Publication No.: US20250077309A1

    Publication Date: 2025-03-06

    Application No.: US18954407

    Application Date: 2024-11-20

    Inventor: David Dice

    Abstract: A concurrency-restricting lock may divide a set of threads waiting to acquire the lock into an active circulating set (ACS) that contends for the lock, and a passive set (PS) that awaits an opportunity to contend for the lock. The lock, which may include multiple constituent lock types, lists, or queues, may be unfair over the short term, but may improve the throughput of the underlying multithreaded application. Culling and long-term fairness policies may be applied to the lock to move excess threads from the ACS to the PS or promote threads from the PS to the ACS. These policies may constrain the size or distribution of threads in the ACS (which may be NUMA-aware). A waiting policy may avoid aggressive promotion from the PS to the ACS, and a short-term fairness policy may move a thread from the tail of a list or queue to its head.
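
    The mechanism described in this abstract can be illustrated with a small Java sketch. The class below (names such as RestrictedLock, ACTIVE_LIMIT, and the passive queue are illustrative, not taken from the patent) admits at most a fixed number of threads to contend for an inner lock and parks the rest in a passive set; the culling, NUMA-aware, and long-term fairness policies mentioned in the abstract are omitted, and wakeups are deliberately conservative.

        import java.util.concurrent.ConcurrentLinkedQueue;
        import java.util.concurrent.atomic.AtomicInteger;
        import java.util.concurrent.locks.LockSupport;
        import java.util.concurrent.locks.ReentrantLock;

        // Illustrative sketch only: a lock that admits at most ACTIVE_LIMIT threads to
        // contend (the active circulating set, ACS) and parks the rest in a passive
        // queue (PS). The head of the passive queue is unparked on every release and
        // re-checks admission in a loop.
        final class RestrictedLock {
            private static final int ACTIVE_LIMIT = 2;                      // size bound on the ACS
            private final AtomicInteger active = new AtomicInteger();       // threads currently in the ACS
            private final ConcurrentLinkedQueue<Thread> passive = new ConcurrentLinkedQueue<>();
            private final ReentrantLock inner = new ReentrantLock();        // the lock the ACS contends for

            void lock() {
                while (true) {
                    int a = active.get();
                    if (a < ACTIVE_LIMIT && active.compareAndSet(a, a + 1)) {
                        break;                                               // admitted to the ACS
                    }
                    passive.add(Thread.currentThread());                     // join the passive set
                    if (active.get() >= ACTIVE_LIMIT) {                      // re-check to narrow the lost-wakeup window
                        LockSupport.park(this);
                    }
                    passive.remove(Thread.currentThread());
                }
                inner.lock();                                                // contend only against other ACS threads
            }

            void unlock() {
                inner.unlock();
                active.decrementAndGet();                                    // leave the ACS
                Thread next = passive.peek();
                if (next != null) {
                    LockSupport.unpark(next);                                // promote a passive thread
                }
            }
        }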

    SCALABLE RANGE LOCKS
    Invention Application

    Publication No.: US20240378242A1

    Publication Date: 2024-11-14

    Application No.: US18780266

    Application Date: 2024-07-22

    Abstract: A computer comprising one or more processors and memory may implement multiple threads performing mutually exclusive lock acquisition operations on disjoint ranges of a shared resource, each using atomic compare-and-swap (CAS) operations. A linked list of currently locked ranges is maintained and, upon entry to a lock acquisition operation, a thread waits for all locked ranges overlapping the desired range to be released, then inserts a descriptor for the desired range into the linked list using a single CAS operation. To release a locked range, a thread executes a single fetch-and-add (FAA) operation. The operation may be extended to support simultaneous exclusive and non-exclusive access by allowing overlapping ranges to be locked for non-exclusive access and by performing an additional validation after locking to provide conflict resolution should a conflict be detected.
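
    As a rough illustration of the range-locking semantics in this abstract, the Java sketch below lets threads lock disjoint [start, end) ranges and blocks them only while an overlapping range is held. It deliberately replaces the patent's lock-free list manipulation (single-CAS insert, single-FAA release) with a plain monitor, so it shows the behavior rather than the scalable scheme; all names are illustrative.

        import java.util.ArrayList;
        import java.util.List;

        // Illustrative sketch of range-lock semantics only: disjoint [start, end)
        // ranges are held concurrently, and a thread blocks while any overlapping
        // range is locked. A monitor stands in for the patent's lock-free list.
        final class RangeLock {
            private record Range(long start, long end) {
                boolean overlaps(Range other) {
                    return start < other.end && other.start < end;
                }
            }

            private final List<Range> locked = new ArrayList<>();           // currently locked ranges

            synchronized void lockRange(long start, long end) throws InterruptedException {
                Range requested = new Range(start, end);
                while (overlapsAny(requested)) {
                    wait();                                                  // wait for overlapping ranges to be released
                }
                locked.add(requested);                                       // the patent inserts a descriptor with one CAS
            }

            synchronized void unlockRange(long start, long end) {
                locked.remove(new Range(start, end));                        // the patent releases with a single FAA
                notifyAll();                                                 // wake threads waiting on overlapping ranges
            }

            private boolean overlapsAny(Range requested) {
                for (Range held : locked) {
                    if (held.overlaps(requested)) {
                        return true;
                    }
                }
                return false;
            }
        }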

    GENERIC CONCURRENCY RESTRICTION
    Invention Publication

    Publication No.: US20240345901A1

    Publication Date: 2024-10-17

    Application No.: US18757299

    Application Date: 2024-06-27

    CPC classification number: G06F9/52 G06F9/5022 G06F9/524

    Abstract: Generic Concurrency Restriction (GCR) may divide a set of threads waiting to acquire a lock into two sets: an active set currently able to contend for the lock, and a passive set waiting for an opportunity to join the active set and contend for the lock. The number of threads in the active set may be limited to a predefined maximum or even a single thread. Generic Concurrency Restriction may be implemented as a wrapper around an existing lock implementation. Generic Concurrency Restriction may, in some embodiments, be unfair (e.g., to some threads) over the short term, but may improve the overall throughput of the underlying multithreaded application via passivation of a portion of the waiting threads.
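
    Because GCR is described as a wrapper around an existing lock implementation, a minimal sketch can be written as a decorator over any java.util.concurrent.locks.Lock. The version below restricts the active set to a single thread (the variant mentioned in the abstract) by means of an admission semaphore; threads blocked on the semaphore form the passive set. The class name and the choice of a semaphore as the admission gate are assumptions for illustration, not details from the patent.

        import java.util.concurrent.Semaphore;
        import java.util.concurrent.locks.Lock;

        // Illustrative sketch only: GCR as a thin wrapper over any existing Lock. The
        // admission semaphore bounds the active set; threads blocked on it form the
        // passive set. Culling and fairness policies are omitted.
        final class GcrLock {
            private static final int ACTIVE_LIMIT = 1;                       // single-thread active set
            private final Semaphore admission = new Semaphore(ACTIVE_LIMIT);
            private final Lock inner;                                        // the wrapped, pre-existing lock

            GcrLock(Lock inner) { this.inner = inner; }

            void lock() throws InterruptedException {
                admission.acquire();                                         // join the active set (or wait passively)
                inner.lock();                                                // contend for the underlying lock
            }

            void unlock() {
                inner.unlock();
                admission.release();                                         // let a passive thread become active
            }
        }

    A caller would wrap an existing lock, for example new GcrLock(new ReentrantLock()), and call lock()/unlock() as before, which is what makes the restriction "generic" with respect to the underlying lock.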

    Generic Concurrency Restriction
    Invention Publication

    Publication No.: US20230333916A1

    Publication Date: 2023-10-19

    Application No.: US18341588

    Application Date: 2023-06-26

    CPC classification number: G06F9/52 G06F9/5022 G06F9/524

    Abstract: Generic Concurrency Restriction (GCR) may divide a set of threads waiting to acquire a lock into two sets: an active set currently able to contend for the lock, and a passive set waiting for an opportunity to join the active set and contend for the lock. The number of threads in the active set may be limited to a predefined maximum or even a single thread. Generic Concurrency Restriction may be implemented as a wrapper around an existing lock implementation. Generic Concurrency Restriction may, in some embodiments, be unfair (e.g., to some threads) over the short term, but may improve the overall throughput of the underlying multithreaded application via passivation of a portion of the waiting threads.

    Ticket Locks with Enhanced Waiting

    Publication No.: US20220374287A1

    Publication Date: 2022-11-24

    Application No.: US17817854

    Application Date: 2022-08-05

    Abstract: A computer comprising one or more processors and memory may implement multiple threads that perform a lock operation using a data structure comprising an allocation field and a grant field. Upon entry to a lock operation, a thread allocates a ticket by atomically copying a ticket value contained in the allocation field and incrementing the allocation field. The thread compares the allocated ticket to the grant field. If they are unequal, the thread determines the number of waiting threads. If that number is above a threshold, the thread enters a long-term wait operation comprising determining a location for a long-term wait value and waiting on changes to that value. If the number is below the threshold, or once the long-term wait operation is complete, the thread waits for the grant value to equal the ticket, indicating that the lock is allocated to the thread.
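
    The two-phase waiting scheme in this abstract can be sketched as follows: a thread takes a ticket with an atomic fetch-and-increment of the allocation field and, while many tickets stand between it and the grant field, waits "long term" before falling back to spinning near its turn. In this sketch the separate long-term-wait location described in the abstract is simplified to timed parking, and the threshold value and names are illustrative.

        import java.util.concurrent.atomic.AtomicLong;
        import java.util.concurrent.locks.LockSupport;

        // Illustrative sketch only: a ticket lock with a two-phase wait. Far from its
        // turn, a thread parks briefly; near its turn it spins on the grant field.
        final class TicketLock {
            private static final long LONG_WAIT_THRESHOLD = 4;               // arbitrary cutoff for long-term waiting
            private final AtomicLong allocation = new AtomicLong();          // next ticket to hand out
            private volatile long grant = 0;                                 // ticket currently being served

            void lock() {
                long ticket = allocation.getAndIncrement();                  // atomically copy and increment the allocation field
                while (grant != ticket) {
                    long waitersAhead = ticket - grant;                      // estimate of threads waiting ahead of us
                    if (waitersAhead > LONG_WAIT_THRESHOLD) {
                        LockSupport.parkNanos(1_000L);                       // long-term wait, off the grant field
                    } else {
                        Thread.onSpinWait();                                 // short-term spin near our turn
                    }
                }
            }

            void unlock() {
                grant = grant + 1;                                           // only the holder writes grant, so a plain volatile store suffices
            }
        }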

    Compact and Scalable Mutual Exclusion

    Publication No.: US20220253339A1

    Publication Date: 2022-08-11

    Application No.: US17169246

    Application Date: 2021-02-05

    Abstract: Compact and scalable mutual exclusion techniques are implemented for multiple executing threads. A thread may acquire a lock by swapping a pointer to the thread into a tail field of a lock data structure. If the swap operation returns a null value, then the lock is acquired. If the swap operation does not return a null value, then the thread may wait to obtain the lock from a predecessor thread. The thread may wait until a grant field in a data structure for the predecessor thread stores a pointer to the lock, signaling that the thread may acquire the lock.
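
    The acquisition path in this abstract, swapping a thread pointer into the lock's tail field and then watching the predecessor's grant field, can be sketched in Java as below. Each thread owns a single node with one grant field, and a null result from the swap means the lock was free; the names and the spin-then-clear handshake are illustrative assumptions rather than the patent's exact protocol.

        import java.util.concurrent.atomic.AtomicReference;

        // Illustrative sketch only: the lock stores just a tail pointer, and each
        // thread owns a single node with one grant field. A thread swaps its node
        // into the tail; a null result means the lock was free. Otherwise it waits
        // until the predecessor's grant field points at this lock, then clears the
        // field so the predecessor can reuse it.
        final class CompactLock {
            static final class Node {
                final AtomicReference<CompactLock> grant = new AtomicReference<>();
            }
            private static final ThreadLocal<Node> SELF = ThreadLocal.withInitial(Node::new);

            private final AtomicReference<Node> tail = new AtomicReference<>();

            void lock() {
                Node self = SELF.get();
                Node predecessor = tail.getAndSet(self);                     // swap our node into the tail field
                if (predecessor == null) {
                    return;                                                  // lock was free: acquired
                }
                while (predecessor.grant.get() != this) {
                    Thread.onSpinWait();                                     // wait for the predecessor to pass us this lock
                }
                predecessor.grant.set(null);                                 // release the predecessor's grant field for reuse
            }

            void unlock() {
                Node self = SELF.get();
                if (tail.compareAndSet(self, null)) {
                    return;                                                  // no successor was waiting
                }
                self.grant.set(this);                                        // signal the successor via our grant field
                while (self.grant.get() == this) {
                    Thread.onSpinWait();                                     // wait until the successor has taken the lock
                }
            }
        }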

    Systems and Methods for Performing Concurrency Restriction and Throttling over Contended Locks

    Publication No.: US20220214930A1

    Publication Date: 2022-07-07

    Application No.: US17701302

    Application Date: 2022-03-22

    Inventor: David Dice

    Abstract: A concurrency-restricting lock may divide a set of threads waiting to acquire the lock into an active circulating set (ACS) that contends for the lock, and a passive set (PS) that awaits an opportunity to contend for the lock. The lock, which may include multiple constituent lock types, lists, or queues, may be unfair over the short term, but may improve the throughput of the underlying multithreaded application. Culling and long-term fairness policies may be applied to the lock to move excess threads from the ACS to the PS or promote threads from the PS to the ACS. These policies may constrain the size or distribution of threads in the ACS (which may be NUMA-aware). A waiting policy may avoid aggressive promotion from the PS to the ACS, and a short-term fairness policy may move a thread from the tail of a list or queue to its head.
