In-memory distributed cache
    Granted Patent

    Publication No.: US10664405B2

    Publication Date: 2020-05-26

    Application No.: US15803416

    Filing Date: 2017-11-03

    Applicant: Google LLC

    Inventor: Asa Briggs

    Abstract: A method for an in-memory distributed cache includes receiving a write request from a client device to write a block of client data in random access memory (RAM) of a memory host, and determining whether to allow the write request by checking whether the client device has permission to write the block of client data at the memory host, whether the block of client data is already saved at the memory host, and whether a free block of RAM is available. When the client device has permission to write at the memory host, the block of client data is not already saved there, and a free block of RAM is available, the write request is allowed and the client device is allowed to write the block of client data to the free block of RAM.
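
    The admission logic above reduces to three ordered checks. Below is a minimal Python sketch of that flow; the class and method names (MemoryHost, allow_write) are hypothetical, since the patent describes a method rather than an API:

```python
class MemoryHost:
    """Sketch of a memory host that admits client writes into free RAM blocks."""

    def __init__(self, num_blocks, permitted_clients):
        self.blocks = {}                        # block_id -> (slot, data) saved here
        self.free_slots = set(range(num_blocks))
        self.permitted_clients = set(permitted_clients)

    def allow_write(self, client_id, block_id):
        """The three checks from the abstract, in order."""
        if client_id not in self.permitted_clients:
            return False                        # no permission at this memory host
        if block_id in self.blocks:
            return False                        # block of client data already saved
        return bool(self.free_slots)            # need at least one free block of RAM

    def write(self, client_id, block_id, data):
        if not self.allow_write(client_id, block_id):
            raise PermissionError("write request denied")
        slot = self.free_slots.pop()            # claim a free block of RAM
        self.blocks[block_id] = (slot, data)
        return slot
```

    For example, MemoryHost(4, {"client-a"}).write("client-a", "blk-1", b"...") succeeds, while a second write of "blk-1" is refused because the block is already saved at the host.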

    Queuing read requests based on write requests

    Publication No.: US10649673B2

    Publication Date: 2020-05-12

    Application No.: US16383776

    Filing Date: 2019-04-15

    Abstract: Embodiments of the present disclosure may relate to methods and a computer program product for allowing writes based on a granularity level. The method for a storage server may include receiving a granularity level for a particular volume of a storage device of a client computer, including an effective duration for the received granularity level. The method may include receiving an anticipated write to the particular volume at an anticipated write granularity level, and verifying whether the anticipated write granularity level substantially matches the received granularity level within the effective duration. The method may also include writing, in response to the anticipated write granularity level substantially matching the received granularity level within the effective duration, the anticipated write to the particular volume at the received granularity level.
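
    As a rough illustration of the verification step, here is a Python sketch that tracks a received granularity level and its effective duration per volume; GranularityGate and its methods are invented names, and "substantially matches" is simplified here to exact equality:

```python
import time

class GranularityGate:
    """Sketch of the storage-server check: an anticipated write is allowed
    only if its granularity level matches the received level for the
    volume while that level is still effective."""

    def __init__(self):
        self.levels = {}                        # volume -> (granularity, expires_at)

    def receive_level(self, volume, granularity, effective_duration_s):
        self.levels[volume] = (granularity, time.monotonic() + effective_duration_s)

    def verify(self, volume, anticipated_granularity):
        entry = self.levels.get(volume)
        if entry is None:
            return False                        # no level received for this volume
        granularity, expires_at = entry
        if time.monotonic() > expires_at:
            return False                        # received level no longer effective
        # "Substantially matches" is reduced to exact equality in this sketch.
        return anticipated_granularity == granularity

    def write(self, volume, anticipated_granularity, data, backing_store):
        if not self.verify(volume, anticipated_granularity):
            return False
        backing_store[volume] = data            # perform the anticipated write
        return True
```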

    CENTRAL PROCESSING UNIT CACHE FRIENDLY MULTITHREADED ALLOCATION

    Publication No.: US20190258579A1

    Publication Date: 2019-08-22

    Application No.: US15898407

    Filing Date: 2018-02-16

    Abstract: A cluster allocation bitmap determines which clusters in a band of storage remain unallocated. However, concurrent access to a cluster allocation bitmap can cause CPU stalls as copies of the bitmap in a CPU's level 1 (L1) cache are invalidated by another CPU allocating from the same bitmap. In one embodiment, cluster allocation bitmaps are divided into chunks that are sized and aligned to L1 cache lines. Each core of a multicore CPU is directed at random to allocate space out of a chunk. Because the chunks are L1 cache line aligned, the odds of the same portion of the cluster allocation bitmap being loaded into multiple L1 caches by multiple CPU cores are reduced, which in turn reduces the odds of an L1 cache invalidation. The number of CPU cores performing allocations on a given cluster allocation bitmap is limited based on the number of chunks with unallocated space that remain.
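
    A simplified Python model of the chunking idea follows; it represents each cluster as one byte for readability rather than packing real bits, and ChunkedBitmap is a hypothetical name. The length of chunks_with_space() doubles as the cap on how many cores may allocate concurrently, per the last sentence of the abstract:

```python
import random

CACHE_LINE_BYTES = 64
CLUSTERS_PER_CHUNK = CACHE_LINE_BYTES * 8       # bits in one L1-line-sized chunk

class ChunkedBitmap:
    """Cluster allocation bitmap split into L1-cache-line-sized chunks so
    that concurrent allocators rarely share (and invalidate) a cache line."""

    def __init__(self, num_clusters):
        # One byte per cluster for readability; a real bitmap packs 8 per byte.
        self.bits = bytearray(num_clusters)      # 0 = free, 1 = allocated
        self.num_chunks = -(-num_clusters // CLUSTERS_PER_CHUNK)   # ceiling division

    def _chunk_range(self, chunk):
        lo = chunk * CLUSTERS_PER_CHUNK
        return lo, min(lo + CLUSTERS_PER_CHUNK, len(self.bits))

    def chunks_with_space(self):
        """Chunks that still contain unallocated clusters; the length of
        this list bounds how many cores should allocate concurrently."""
        return [c for c in range(self.num_chunks)
                if 0 in self.bits[slice(*self._chunk_range(c))]]

    def allocate(self):
        """Start at a randomly chosen chunk, as the abstract directs."""
        candidates = self.chunks_with_space()
        if not candidates:
            return None                          # band is fully allocated
        lo, hi = self._chunk_range(random.choice(candidates))
        for cluster in range(lo, hi):
            if self.bits[cluster] == 0:
                self.bits[cluster] = 1
                return cluster                   # cluster number allocated
```

    The random chunk choice is what spreads cores across distinct cache lines; a deterministic first-fit scan would funnel every core back into the same line.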

    COORDINATION OF CACHE AND MEMORY RESERVATION
    Patent Application

    Publication No.: US20190243763A1

    Publication Date: 2019-08-08

    Application No.: US16390334

    Filing Date: 2019-04-22

    Abstract: A method for coordinating cache and memory reservation in a computerized system includes identifying at least one running application, recognizing the at least one application as a latency-critical application, monitoring information associated with a current cache access rate and a required memory bandwidth of the at least one application, and allocating a cache partition whose size corresponds to the cache access rate and the required memory bandwidth of the at least one application. The method further includes defining a threshold value as a number of cache misses per time unit, determining a reduction of cache misses per time unit, retaining the cache partition in response to the reduction of cache misses per time unit being above the threshold value, assigning a memory-request scheduling priority that includes a medium priority level, and assigning a memory channel to the at least one application to avoid memory channel contention.
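
    The steps above can be read as a small control loop. The sketch below is one possible Python rendering; the sizing rule and the numeric priority scale are assumptions, since the abstract only says the partition size "corresponds to" the measured access rate and bandwidth:

```python
class CachePartitionCoordinator:
    """Sketch of the coordination steps for one latency-critical app:
    size a cache partition from observed metrics, then retain it while
    the reduction in cache misses stays above a threshold."""

    MEDIUM_PRIORITY = 1                          # hypothetical scale: 0=low, 1=medium, 2=high

    def __init__(self, miss_reduction_threshold):
        self.threshold = miss_reduction_threshold    # cache misses per time unit

    def partition_size(self, cache_access_rate, required_bandwidth):
        # The abstract only says the size "corresponds to" both inputs;
        # this weighted sum is a stand-in for the real sizing rule.
        return int(0.5 * cache_access_rate + 0.5 * required_bandwidth)

    def evaluate(self, misses_before, misses_after):
        """Retain the partition when the miss reduction exceeds the threshold,
        and schedule the app's memory requests at medium priority."""
        reduction = misses_before - misses_after
        return {
            "retain_partition": reduction > self.threshold,
            "scheduling_priority": self.MEDIUM_PRIORITY,
        }
```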

    Item-wise simulation in a block cache where data eviction places data into comparable score in comparable section in the block cache

    Publication No.: US10185666B2

    Publication Date: 2019-01-22

    Application No.: US14970034

    Filing Date: 2015-12-15

    Applicant: Facebook, Inc.

    Abstract: Several embodiments include a method of operating a cache appliance comprising a primary memory implementing an item-wise cache and a secondary memory implementing a block cache. The cache appliance can emulate item-wise storage and eviction in the block cache by maintaining, in the primary memory, data items sampled from the block cache. The sampled items can enable the cache appliance to represent a spectrum of retention priorities. When storing a pending data item into the block cache, a comparison of the pending data item with the sampled items can enable the cache appliance to identify where to insert a block containing the pending data item. When evicting a block from the block cache, a comparison of a data item in the block with at least one of the sampled items can enable the cache appliance to determine whether to recycle or retain the data item.
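
    To make the sampling idea concrete, here is a Python sketch in which a sorted list of (score, item) samples stands in for the spectrum of retention priorities; BlockCacheSimulator and the score-to-section mapping are illustrative assumptions, not the patent's actual scheme:

```python
import bisect

class BlockCacheSimulator:
    """Sampled (retention_score, item_id) pairs, kept sorted in primary
    memory, stand in for a spectrum of retention priorities in the block cache."""

    def __init__(self, sampled_items):
        self.samples = sorted(sampled_items)     # list of (retention_score, item_id)

    def section_for(self, pending_score, num_sections):
        """Rank a pending item against the samples to pick the block-cache
        section with a comparable retention priority."""
        rank = bisect.bisect_left(self.samples, (pending_score,))
        return rank * num_sections // (len(self.samples) + 1)

    def should_recycle(self, item_score):
        """On block eviction, recycle an item that still outranks the
        lowest-scoring sample; otherwise let it go."""
        return bool(self.samples) and item_score > self.samples[0][0]
```

    With samples drawn from across the block cache, an item scoring above the median sample lands in an upper section, which is how item-wise behavior is approximated without tracking every item.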

    COORDINATION OF CACHE AND MEMORY RESERVATION
    Patent Application

    Publication No.: US20190018774A1

    Publication Date: 2019-01-17

    Application No.: US15647301

    Filing Date: 2017-07-12

    Abstract: A method for coordinating cache and memory reservation in a computerized system includes identifying at least one running application, recognizing the at least one application as a latency-critical application, monitoring information associated with a current cache access rate and a required memory bandwidth of the at least one application, and allocating a cache partition whose size corresponds to the cache access rate and the required memory bandwidth of the at least one application. The method further includes defining a threshold value as a number of cache misses per time unit, determining a reduction of cache misses per time unit, retaining the cache partition in response to the reduction of cache misses per time unit being above the threshold value, assigning a memory-request scheduling priority that includes a medium priority level, and assigning a memory channel to the at least one application to avoid memory channel contention.
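
    This entry's abstract is identical to that of US20190243763A1 above, so rather than repeat that sketch, the sketch below covers the remaining step: assigning memory channels to latency-critical applications to limit channel contention. ChannelAssigner and the least-loaded fallback are assumptions, not the patent's stated mechanism:

```python
class ChannelAssigner:
    """Sketch: give each latency-critical application its own memory
    channel where possible to avoid memory channel contention."""

    def __init__(self, num_channels):
        self.load = [0] * num_channels           # apps currently assigned per channel
        self.assignment = {}                     # app_id -> channel

    def assign(self, app_id):
        if app_id in self.assignment:
            return self.assignment[app_id]
        # Prefer an idle channel; otherwise take the least-loaded one,
        # which minimizes (but cannot always eliminate) contention.
        channel = min(range(len(self.load)), key=self.load.__getitem__)
        self.load[channel] += 1
        self.assignment[app_id] = channel
        return channel

    def release(self, app_id):
        channel = self.assignment.pop(app_id, None)
        if channel is not None:
            self.load[channel] -= 1
```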