-
Publication No.: US10664405B2
Publication Date: 2020-05-26
Application No.: US15803416
Filing Date: 2017-11-03
Applicant: Google LLC
Inventor: Asa Briggs
IPC Classes: G06F12/0871, G06F12/0873, G06F12/02, G06F12/126, G06F12/14
Abstract: A method for an in-memory distributed cache includes receiving a write request from a client device to write a block of client data in random access memory (RAM) of a memory host, and determining whether to allow the write request by checking whether the client device has permission to write the block of client data at the memory host, whether the block of client data is already saved at the memory host, and whether a free block of RAM is available. When the client device has permission, the block is not already saved at the memory host, and a free block of RAM is available, the write request is allowed and the client device writes the block of client data to the free block of RAM.
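The three admission checks can be sketched as a toy model; the class and method names are hypothetical, since the patent does not specify an API:

```python
class MemoryHost:
    """Toy model of a memory host in an in-memory distributed cache."""

    def __init__(self, num_blocks, permitted_clients):
        self.free_blocks = list(range(num_blocks))  # indices of free RAM blocks
        self.stored = {}                            # block_id -> (slot, data)
        self.permitted = set(permitted_clients)

    def try_write(self, client_id, block_id, data):
        """Allow the write only if all three checks pass."""
        if client_id not in self.permitted:
            return False          # client lacks write permission
        if block_id in self.stored:
            return False          # block already saved at this host
        if not self.free_blocks:
            return False          # no free block of RAM available
        slot = self.free_blocks.pop()
        self.stored[block_id] = (slot, data)
        return True
```

A host with one free block and one permitted client then accepts exactly one write from that client and rejects duplicates, strangers, and writes once RAM is exhausted.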
-
Publication No.: US10649673B2
Publication Date: 2020-05-12
Application No.: US16383776
Filing Date: 2019-04-15
IPC Classes: G06F3/06, G06F16/2455, G06F16/23, G06F12/14, G06F12/0873, G06F12/0831, G06F11/14
Abstract: Embodiments of the present disclosure relate to methods and a computer program product for allowing writes based on a granularity level. The method for a storage server may include receiving a granularity level for a particular volume of a storage device of a client computer, together with an effective duration for that granularity level. The method may include receiving an anticipated write to the particular volume at an anticipated write granularity level, and verifying whether the anticipated write granularity level substantially matches the received granularity level within the effective duration. In response to a substantial match within the effective duration, the anticipated write is written to the particular volume at the received granularity level.
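A minimal sketch of the granularity check, assuming an explicit expiry timestamp stands in for the effective duration; all names here are illustrative, not the patented interface:

```python
import time


class VolumeGranularityGuard:
    """Tracks a received granularity level per volume with an expiry."""

    def __init__(self):
        self.levels = {}  # volume -> (granularity_level, expiry_timestamp)

    def set_granularity(self, volume, level, effective_duration_s, now=None):
        now = time.monotonic() if now is None else now
        self.levels[volume] = (level, now + effective_duration_s)

    def allow_write(self, volume, write_level, now=None):
        """Allow only while the level is still effective and matches."""
        now = time.monotonic() if now is None else now
        if volume not in self.levels:
            return False
        level, expiry = self.levels[volume]
        return now <= expiry and write_level == level
```

The `now` parameter is only there to make the duration check testable without real waiting.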
-
Publication No.: US20200026655A1
Publication Date: 2020-01-23
Application No.: US16586251
Filing Date: 2019-09-27
Applicant: Intel Corporation
Inventors: Zhe WANG, Alaa R. Alameldeen, Yi Zou, Gordon King
IPC Classes: G06F12/0873, G06F12/0811, G06F12/0897, G06F12/02, G06F13/16
Abstract: An apparatus is described. The apparatus includes a memory controller to interface with a multi-level memory, where an upper level of the multi-level memory acts as a cache for a lower level. The memory controller has circuitry to determine: i) an original address of a slot in the upper level of memory from the address of a memory request, in a direct-mapped fashion; ii) a miss in the cache for the request because the slot is pinned with data from another, competing address; iii) a partner slot of the slot in the cache, in response to the miss; and iv) whether there is a hit or miss in the partner slot for the request.
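The pinned-slot/partner-slot lookup might be sketched as follows; the modulo mapping and the partner function here are illustrative stand-ins, not the patented circuitry:

```python
class DirectMappedWithPartner:
    """Direct-mapped near-memory cache; pinned slots redirect to a partner."""

    def __init__(self, num_slots):
        assert num_slots % 2 == 0
        self.num_slots = num_slots
        self.tags = [None] * num_slots    # address whose data each slot caches
        self.pinned = [False] * num_slots

    def slot_of(self, addr):
        return addr % self.num_slots      # direct-mapped original slot

    def partner_of(self, slot):
        # Hypothetical partner function: offset by half the slot count.
        return (slot + self.num_slots // 2) % self.num_slots

    def lookup(self, addr):
        slot = self.slot_of(addr)
        if self.tags[slot] == addr:
            return ("hit", slot)
        if self.pinned[slot]:
            # Miss caused by a pinned, competing address: try the partner.
            p = self.partner_of(slot)
            return ("hit", p) if self.tags[p] == addr else ("miss", p)
        return ("miss", slot)
```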
-
Publication No.: US10528475B2
Publication Date: 2020-01-07
Application No.: US15711145
Filing Date: 2017-09-21
IPC Classes: G06F12/0871, G06F3/06, G06F12/0873
Abstract: A dynamic premigration protocol is implemented in response to a secondary tier returning to an operational state while the amount of data in a premigration queue of a primary tier exceeds a first threshold. The dynamic premigration protocol can comprise at least a temporary premigration throttling level. The original premigration protocol is restored in response to the amount of data in the premigration queue decreasing below the first threshold.
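The protocol switch reduces to a simple predicate; a hedged sketch, with names and units chosen for illustration:

```python
def select_premigration_protocol(queue_bytes, threshold_bytes,
                                 secondary_operational):
    """Pick the premigration protocol for the primary tier.

    Returns the throttled dynamic protocol only while the secondary tier
    is back up AND the premigration queue exceeds the first threshold;
    otherwise the original protocol applies.
    """
    if secondary_operational and queue_bytes > threshold_bytes:
        return "dynamic-throttled"
    return "original"
```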
-
Publication No.: US20190258579A1
Publication Date: 2019-08-22
Application No.: US15898407
Filing Date: 2018-02-16
Inventors: Omar CAREY, Rajsekhar DAS
IPC Classes: G06F12/0873, G06F12/0817, G06F12/0808, G06F9/38
CPC Classes: G06F12/0238, G06F2212/1016, G06F2212/45, G06F2212/7207
Abstract: A cluster allocation bitmap determines which clusters in a band of storage remain unallocated. However, concurrent access to a cluster allocation bitmap can cause CPU stalls, as copies of the bitmap in one CPU's level 1 (L1) cache are invalidated by another CPU allocating from the same bitmap. In one embodiment, cluster allocation bitmaps are divided into chunks sized and aligned to L1 cache lines. Each core of a multicore CPU is directed at random to allocate space out of a chunk. Because the chunks are L1-cache-line aligned, the odds of the same portion of the cluster allocation bitmap being loaded into multiple L1 caches by multiple CPU cores are reduced, which in turn reduces the odds of an L1 cache invalidation. The number of CPU cores performing allocations on a given cluster allocation bitmap is limited based on the number of chunks with unallocated space that remain.
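A rough Python model of the chunking idea, assuming a 64-byte cache line and eliding the actual multicore concurrency:

```python
import random

CACHE_LINE_BYTES = 64
BITS_PER_CHUNK = CACHE_LINE_BYTES * 8  # one cache-line chunk = 512 bits


class ChunkedAllocationBitmap:
    """Allocation bitmap split into cache-line-sized, aligned chunks."""

    def __init__(self, num_clusters, rng=random):
        assert num_clusters % BITS_PER_CHUNK == 0
        self.bits = [False] * num_clusters
        self.num_chunks = num_clusters // BITS_PER_CHUNK
        self.rng = rng

    def chunks_with_space(self):
        """Chunks that still contain at least one unallocated cluster."""
        out = []
        for c in range(self.num_chunks):
            lo = c * BITS_PER_CHUNK
            if not all(self.bits[lo:lo + BITS_PER_CHUNK]):
                out.append(c)
        return out

    def allocate(self):
        """Direct the caller at random to a chunk with free space."""
        candidates = self.chunks_with_space()
        if not candidates:
            return None
        lo = self.rng.choice(candidates) * BITS_PER_CHUNK
        for i in range(lo, lo + BITS_PER_CHUNK):
            if not self.bits[i]:
                self.bits[i] = True
                return i
        return None
```

Because each core works within a randomly chosen chunk, two cores rarely dirty the same cache line; the real implementation would additionally cap the number of concurrent allocators at the number of non-full chunks.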
-
Publication No.: US20190243763A1
Publication Date: 2019-08-08
Application No.: US16390334
Filing Date: 2019-04-22
Inventors: Robert Birke, Yiyu Chen, Navaneeth Rameshan, Martin Schmatz
IPC Classes: G06F12/0831, G06F12/0842, G06F12/0804, G06F11/34, G06F12/0877, G06F12/0873, G06F9/455, G06F12/0868, G06F12/0871
Abstract: A method for coordinating cache and memory reservation in a computerized system includes identifying at least one running application; recognizing the application as latency-critical; monitoring information associated with the application's current cache access rate and required memory bandwidth; allocating a cache partition whose size corresponds to that cache access rate and required memory bandwidth; defining a threshold value as a number of cache misses per time unit; determining a reduction of cache misses per time unit and, in response to the reduction being above the threshold value, retaining the cache partition; assigning a memory-request scheduling priority, including a medium priority level; and assigning a memory channel to the application to avoid memory channel contention.
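The coordination steps might be modeled as below; the partition-sizing formula and priority levels are placeholders, not the claimed method:

```python
class CacheMemoryCoordinator:
    """Sketch of cache/memory reservation for latency-critical apps."""

    def __init__(self, miss_reduction_threshold):
        self.threshold = miss_reduction_threshold  # cache misses per time unit
        self.partitions = {}                       # app -> partition size
        self.priorities = {}
        self.channels = {}
        self.next_channel = 0

    def admit(self, app, cache_access_rate, required_bandwidth,
              miss_reduction):
        # Toy sizing rule: partition scales with access rate and bandwidth.
        size = cache_access_rate * required_bandwidth
        if miss_reduction > self.threshold:
            self.partitions[app] = size        # retain the cache partition
        self.priorities[app] = "medium"        # medium scheduling priority
        self.channels[app] = self.next_channel # dedicated memory channel
        self.next_channel += 1                 # avoids channel contention
```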
-
Publication No.: US20190220414A1
Publication Date: 2019-07-18
Application No.: US16247912
Filing Date: 2019-01-15
Applicant: Arm Limited
IPC Classes: G06F12/0864, G06F12/0873, G06F12/0815, G06F12/0871, G06F12/126
CPC Classes: G06F12/0864, G06F12/0815, G06F12/0871, G06F12/0873, G06F12/126, G06F2212/6032
Abstract: There is provided an apparatus that includes storage circuitry. The storage circuitry is made up of a plurality of sets, each set having at least one storage location. Receiving circuitry receives an access request that includes an input address. Lookup circuitry obtains a plurality of candidate sets that correspond to an index part of the input address, and determines a selected storage location from the candidate sets using an access policy. The access policy causes the lookup circuitry to iterate through the candidate sets to attempt to locate an appropriate storage location, which is accessed once found.
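One way to sketch the candidate-set iteration, with an invented re-hash standing in for the unspecified candidate derivation and a free-or-matching-way rule standing in for the access policy:

```python
class MultiCandidateCache:
    """Storage of sets/ways; lookup iterates candidate sets in order."""

    def __init__(self, num_sets, ways, num_candidates=2):
        self.sets = [[None] * ways for _ in range(num_sets)]
        self.num_sets = num_sets
        self.num_candidates = num_candidates

    def candidate_sets(self, addr):
        index = addr % self.num_sets  # index part of the input address
        # Illustrative: each further candidate is a simple re-hash.
        return [(index + i * 7) % self.num_sets
                for i in range(self.num_candidates)]

    def find_location(self, addr):
        """Access policy: first way that is free or already holds addr."""
        for s in self.candidate_sets(addr):
            for w, tag in enumerate(self.sets[s]):
                if tag is None or tag == addr:
                    return (s, w)
        return None  # no appropriate storage location found
```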
-
Publication No.: US10185666B2
Publication Date: 2019-01-22
Application No.: US14970034
Filing Date: 2015-12-15
Applicant: Facebook, Inc.
Inventors: Jana van Greunen, Huapeng Zhou, Linpeng Tang
IPC Classes: G06F12/12, G06F12/121, G06F12/127, G06F12/0873, G06F12/0875
Abstract: Several embodiments include a method of operating a cache appliance comprising a primary memory implementing an item-wise cache and a secondary memory implementing a block cache. The cache appliance can emulate item-wise storage and eviction in the block cache by maintaining, in the primary memory, data items sampled from the block cache. The sampled items enable the cache appliance to represent a spectrum of retention priorities. When storing a pending data item into the block cache, comparing the pending data item with the sampled items enables the cache appliance to identify where to insert the block containing the pending item. When evicting a block from the block cache, comparing a data item in the block with at least one of the sampled items enables the cache appliance to determine whether to recycle/retain the data item.
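The sampled-item comparisons can be illustrated with sorted retention priorities; the priority metric and the median recycling rule are assumptions for the sketch:

```python
import bisect


class SampledBlockCache:
    """Sampled items represent a spectrum of retention priorities."""

    def __init__(self, sampled_priorities):
        # Sorted priorities of sampled items (low -> evict sooner).
        self.samples = sorted(sampled_priorities)

    def insertion_rank(self, item_priority):
        """Compare a pending item with the samples to pick where to insert."""
        return bisect.bisect_left(self.samples, item_priority)

    def should_recycle(self, item_priority):
        """Recycle/retain an item that beats the median sampled priority."""
        median = self.samples[len(self.samples) // 2]
        return item_priority >= median
```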
-
Publication No.: US20190018774A1
Publication Date: 2019-01-17
Application No.: US15647301
Filing Date: 2017-07-12
Inventors: Robert Birke, Yiyu Chen, Navaneeth Rameshan, Martin Schmatz
IPC Classes: G06F12/0831, G06F12/0871, G06F12/0804, G06F12/0873, G06F12/0877, G06F12/0868
Abstract: A method for coordinating cache and memory reservation in a computerized system includes identifying at least one running application; recognizing the application as latency-critical; monitoring information associated with the application's current cache access rate and required memory bandwidth; allocating a cache partition whose size corresponds to that cache access rate and required memory bandwidth; defining a threshold value as a number of cache misses per time unit; determining a reduction of cache misses per time unit and, in response to the reduction being above the threshold value, retaining the cache partition; assigning a memory-request scheduling priority, including a medium priority level; and assigning a memory channel to the application to avoid memory channel contention.
-
Publication No.: US20190012082A1
Publication Date: 2019-01-10
Application No.: US16002393
Filing Date: 2018-06-07
Inventors: Amit MITKAR, Andrei EROFEEV
IPC Classes: G06F3/06, G06F12/0871, G06F12/0873
CPC Classes: G06F3/061, G06F3/0616, G06F3/065, G06F3/0653, G06F3/0655, G06F3/0656, G06F3/0659, G06F3/0685, G06F12/0871, G06F12/0873, G06F2212/1016, G06F2212/1036, G06F2212/281, G06F2212/305
Abstract: Systems and methods can implement one or more intelligent caching algorithms that reduce wear on the SSD and/or improve caching performance. Such algorithms can improve storage utilization and I/O efficiency by taking into account the write-wearing limitations of the SSD. Accordingly, the systems and methods can cache to the SSD while avoiding overly frequent writes, to increase (or attempt to increase) the lifespan of the SSD. The systems and methods may, for instance, write data to the SSD only once that data has been read from the hard disk or memory multiple times, to avoid (or attempt to avoid) writing data that has been read only once. The systems and methods may also write large chunks of data to the SSD at once instead of a single unit of data at a time. Further, the systems and methods can write to the SSD in a circular fashion.
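The three heuristics (promote only after repeated reads, batch into large chunks, lay chunks out circularly) can be sketched together; the thresholds are illustrative, not the claimed values:

```python
class WearAwareSSDCache:
    """SSD write-wear reduction: read-count promotion, batching, circular log."""

    def __init__(self, ssd_chunks, promote_after=2, batch_size=4):
        self.read_counts = {}
        self.pending = []                  # items batched before an SSD write
        self.ssd = [None] * ssd_chunks     # circular log of written chunks
        self.write_ptr = 0
        self.promote_after = promote_after
        self.batch_size = batch_size

    def on_read(self, key):
        """Record a read from disk/memory; promote after repeated reads."""
        self.read_counts[key] = self.read_counts.get(key, 0) + 1
        if self.read_counts[key] == self.promote_after:
            self.pending.append(key)
            if len(self.pending) == self.batch_size:
                self._flush()

    def _flush(self):
        """Write one large chunk at the circular write pointer."""
        self.ssd[self.write_ptr] = list(self.pending)
        self.write_ptr = (self.write_ptr + 1) % len(self.ssd)
        self.pending = []
```

Data read only once never reaches `pending`, so single-use reads cost the SSD nothing; the circular pointer spreads writes evenly across the device.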