LOW-LATENCY SHARED MEMORY CHANNEL ACROSS ADDRESS SPACES WITHOUT SYSTEM CALL OVERHEAD IN A COMPUTING SYSTEM

    Publication Number: US20220129175A1

    Publication Date: 2022-04-28

    Application Number: US17647291

    Application Date: 2022-01-06

    Applicant: VMware, Inc.

    Abstract: Examples provide a method of communication between a client application and a filesystem server in a virtualized computing system. The client application executes in a virtual machine (VM) and the filesystem server executes in a hypervisor. The method includes: allocating, by the client application, first shared memory in a guest virtual address space of the client application; creating a guest application shared memory channel between the client application and the filesystem server upon request by the client application to a driver in the VM, the driver in communication with the filesystem server, the guest application shared memory channel using the first shared memory; sending authentication information associated with the client application to the filesystem server to create cached authentication information at the filesystem server; and submitting a command in the guest application shared memory channel from the client application to the filesystem server, the command including the authentication information.
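The channel described above lets the client submit commands to the filesystem server through memory both sides already share, avoiding a per-command system call, with authentication information cached server-side and carried in each command. A real implementation maps guest pages into the hypervisor's address space; the minimal sketch below is a single-host stand-in in which a POSIX shared memory segment models the shared region, and the names `ShmChannel`, `submit`, and `serve` are hypothetical, not from the patent:

```python
import struct
from multiprocessing import shared_memory

class ShmChannel:
    """Hypothetical stand-in for the guest application shared memory channel.
    A POSIX shared memory segment models the region shared between the
    client application (guest) and the filesystem server (hypervisor)."""
    HDR = struct.Struct("<II")  # (auth_token, payload_length)

    def __init__(self, size=4096):
        self.shm = shared_memory.SharedMemory(create=True, size=size)

    def submit(self, auth_token: int, payload: bytes) -> None:
        # Client side: write the command and its authentication info
        # directly into shared memory -- no system call per command.
        self.shm.buf[:self.HDR.size] = self.HDR.pack(auth_token, len(payload))
        self.shm.buf[self.HDR.size:self.HDR.size + len(payload)] = payload

    def serve(self, cached_auth: int) -> bytes:
        # Server side: compare the command's token against the cached
        # authentication information before executing the command.
        token, length = self.HDR.unpack_from(self.shm.buf, 0)
        if token != cached_auth:
            raise PermissionError("authentication mismatch")
        start = self.HDR.size
        return bytes(self.shm.buf[start:start + length])

    def close(self) -> None:
        self.shm.close()
        self.shm.unlink()
```

The key property mirrored here is that `submit` only touches memory; the one-time setup (channel creation and sending authentication to the server) is where the driver and system calls are involved.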

    DYNAMIC GROWTH OF DATA CACHES USING BACKGROUND PROCESSES FOR HASH BUCKET GROWTH

    Publication Number: US20240070080A1

    Publication Date: 2024-02-29

    Application Number: US17900642

    Application Date: 2022-08-31

    Applicant: VMware, Inc.

    CPC classification number: G06F12/0864 G06F2212/1016 G06F2212/604

    Abstract: The disclosure describes growing a data cache using a background hash bucket growth process. Based on a cache growth instruction, a first memory portion is allocated to the data buffer of the data cache and a second memory portion is allocated to the metadata buffer of the data cache. The quantity of hash buckets in the hash bucket buffer is increased and the background hash bucket growth process is initiated, wherein the process is configured to rehash hash bucket entries of the hash bucket buffer into the increased quantity of hash buckets. A data entry is stored in the data buffer using the allocated first memory portion of the data cache, and metadata associated with the data entry is stored using the allocated second memory portion of the metadata buffer, wherein a hash bucket entry associated with the data entry is stored in the increased quantity of hash buckets.
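As a rough illustration of the background-growth idea (a sketch under simplifying assumptions, not the patented implementation), the code below doubles the bucket array immediately and hands rehashing to a background thread that moves entries one old bucket at a time under a short-held lock, so lookups and inserts proceed while growth is in flight; `GrowableCache` and its methods are invented names:

```python
import threading

class GrowableCache:
    """Illustrative cache whose hash bucket array is doubled, with
    rehashing performed incrementally by a background thread."""
    def __init__(self, n_buckets=4):
        self.lock = threading.Lock()
        self.buckets = [[] for _ in range(n_buckets)]
        self.old = None  # buckets still awaiting rehash during growth

    def _idx(self, key, n):
        return hash(key) % n

    def put(self, key, value):
        with self.lock:
            if self.old is not None:
                # Remove any stale copy from the not-yet-rehashed buckets
                # so the new array is authoritative for this key.
                j = self._idx(key, len(self.old))
                self.old[j] = [(k, v) for k, v in self.old[j] if k != key]
            b = self.buckets[self._idx(key, len(self.buckets))]
            for i, (k, _) in enumerate(b):
                if k == key:
                    b[i] = (key, value)
                    return
            b.append((key, value))

    def get(self, key):
        with self.lock:
            for k, v in self.buckets[self._idx(key, len(self.buckets))]:
                if k == key:
                    return v
            if self.old is not None:  # fall back to un-rehashed buckets
                for k, v in self.old[self._idx(key, len(self.old))]:
                    if k == key:
                        return v
            return None

    def grow(self):
        # Allocate the larger bucket array up front...
        with self.lock:
            self.old = self.buckets
            self.buckets = [[] for _ in range(2 * len(self.old))]
        # ...then rehash in the background, one old bucket at a time,
        # holding the lock only briefly so other threads interleave.
        def rehash():
            for i in range(len(self.old)):
                with self.lock:
                    for k, v in self.old[i]:
                        self.buckets[self._idx(k, len(self.buckets))].append((k, v))
                    self.old[i] = []
            with self.lock:
                self.old = None
        t = threading.Thread(target=rehash)
        t.start()
        return t
```

The design choice mirrored from the abstract is that growth is split into an immediate allocation step and a deferred rehash step, so callers never block for the full rehash.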

    SYSTEM AND METHOD OF A HIGHLY CONCURRENT CACHE REPLACEMENT ALGORITHM

    Publication Number: US20210141728A1

    Publication Date: 2021-05-13

    Application Number: US16679570

    Application Date: 2019-11-11

    Applicant: VMware, Inc.

    Abstract: Disclosed are a method and system for managing multi-threaded concurrent access to a cache data structure. The cache data structure includes a hash table and three queues. The hash table includes a list of elements for each hash bucket with each hash bucket containing a mutex object and elements in each of the queues containing lock objects. Multiple threads can each lock a different hash bucket to have access to the list, and multiple threads can each lock a different element in the queues. The locks permit highly concurrent access to the cache data structure without conflict. Also, atomic operations are used to obtain pointers to elements in the queues so that a thread can safely advance each pointer. Race conditions that are encountered with locking an element in the queues or entering an element into the hash table are detected, and the operation encountering the race condition is retried.
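The locking scheme can be approximated as follows. This is an illustrative sketch, not the claimed algorithm: each hash bucket carries its own mutex so threads touching different buckets never contend, the eviction queue pointer is advanced under a small dedicated lock (standing in for the atomic pointer operations, which Python lacks), and an eviction that races with a thread holding the victim's bucket is detected via a non-blocking lock attempt and retried with the next element; `ConcurrentCache` is a hypothetical name:

```python
import threading

class ConcurrentCache:
    """Illustrative per-bucket-locked cache with a FIFO eviction queue."""
    def __init__(self, n_buckets=16, capacity=64):
        self.n = n_buckets
        self.capacity = capacity
        self.bucket_locks = [threading.Lock() for _ in range(n_buckets)]
        self.buckets = [{} for _ in range(n_buckets)]
        self.queue = []  # keys in insertion (eviction) order
        self.queue_lock = threading.Lock()  # models atomic pointer advance

    def _idx(self, key):
        return hash(key) % self.n

    def put(self, key, value):
        i = self._idx(key)
        with self.bucket_locks[i]:  # only this bucket is locked
            self.buckets[i][key] = value
        with self.queue_lock:
            self.queue.append(key)
            need_evict = len(self.queue) > self.capacity
        if need_evict:
            self._evict_one()

    def get(self, key):
        i = self._idx(key)
        with self.bucket_locks[i]:
            return self.buckets[i].get(key)

    def _evict_one(self):
        while True:
            with self.queue_lock:  # advance the queue head atomically
                if not self.queue:
                    return
                victim = self.queue.pop(0)
            i = self._idx(victim)
            # Race detection: if another thread holds the victim's
            # bucket, put the victim back and retry the operation.
            if self.bucket_locks[i].acquire(blocking=False):
                try:
                    self.buckets[i].pop(victim, None)
                    return
                finally:
                    self.bucket_locks[i].release()
            with self.queue_lock:
                self.queue.append(victim)
```

The point of the sketch is the concurrency structure, not the replacement policy: bucket locks keep unrelated threads from serializing, and the non-blocking acquire models detecting and retrying a race rather than blocking inside the eviction path.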
