-
Publication Number: US11341044B2
Publication Date: 2022-05-24
Application Number: US16926336
Application Date: 2020-07-10
Applicant: VMware, Inc.
Inventor: Pradeep Krishnamurthy , Prasanna Aithal , Asit Desai , Bryan Branstetter , Mahesh S. Hiregoudar , Prasad Rao Jangam , Rohan Pasalkar , Srinivasa Shantharam , Raghavan Pichai
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reclaiming one or more portions of storage resources in a computer system serving one or more virtual computing instances, where the storage resources in the computer system are organized in clusters of storage blocks. In one aspect, a method includes maintaining a respective block tracking value for each storage block that indicates whether a call to reclaim the storage block is outstanding; determining, from the block tracking values, a respective cluster priority value for each of the clusters based on a count of storage blocks in the respective cluster for which a call to reclaim is outstanding; and reclaiming a first portion of storage resources in the computer system in accordance with the cluster priority values.
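Below is a minimal Python sketch of the prioritization this abstract describes, assuming an in-memory representation in which each block's tracking value is a boolean (True when a reclaim call is outstanding), a cluster's priority value is the count of such blocks, and reclamation visits clusters in descending priority order. All names (Cluster, reclaim_by_priority, budget_blocks) are illustrative and not taken from the patent.
```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Cluster:
    """A cluster of storage blocks; each tracking value marks an outstanding reclaim call."""
    cluster_id: int
    block_tracking: List[bool] = field(default_factory=list)  # True = reclaim call outstanding

    def priority(self) -> int:
        # Cluster priority value: count of blocks awaiting reclamation.
        return sum(self.block_tracking)

def reclaim_by_priority(clusters: List[Cluster], budget_blocks: int) -> int:
    """Reclaim up to budget_blocks storage blocks, visiting clusters in priority order."""
    reclaimed = 0
    for cluster in sorted(clusters, key=lambda c: c.priority(), reverse=True):
        for i, outstanding in enumerate(cluster.block_tracking):
            if reclaimed >= budget_blocks:
                return reclaimed
            if outstanding:
                # The actual reclaim (e.g. an UNMAP) would be issued here; mark the call satisfied.
                cluster.block_tracking[i] = False
                reclaimed += 1
    return reclaimed

if __name__ == "__main__":
    clusters = [
        Cluster(0, [True, False, True, True]),
        Cluster(1, [False, False, True, False]),
    ]
    print(reclaim_by_priority(clusters, budget_blocks=3))  # reclaims from cluster 0 first
```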
-
Publication Number: US20210294515A1
Publication Date: 2021-09-23
Application Number: US16878626
Application Date: 2020-05-20
Applicant: VMWARE, INC.
Inventor: GURUDUTT KUMAR , Pradeep Krishnamurthy , Prasanth Jose , Vivek Patidar
IPC: G06F3/06
Abstract: The disclosure supports both trickle and burst input/output (I/O) admission rates in journaling file systems. Examples include receiving incoming data; based at least on receiving the incoming data, generating metadata for a journal entry; adding the metadata to an active metadata batch; issuing a data write to write the incoming data to a storage medium; monitoring for a first trigger comprising determining that a data write for an entry in the active metadata batch is complete; based at least on the first trigger, closing the active metadata batch; and issuing a journal write to write entries of the active metadata batch to the storage medium. A second trigger comprises determining that a batch open time exceeds a selected percentage of a moving average of data write durations. A third trigger comprises determining that a batch counter exceeds a count threshold. These triggers work together to reduce I/O latencies.
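A hedged Python sketch of the three closing triggers described above, assuming the active metadata batch is an in-memory list and that the selected percentage, count threshold, and moving-average window are tunable parameters; the concrete values and names here are illustrative only.
```python
import time
from collections import deque

class JournalBatcher:
    """Illustrative sketch of batching journal metadata with the three triggers described above."""

    def __init__(self, pct_of_avg=0.5, count_threshold=64, avg_window=32):
        self.pct_of_avg = pct_of_avg                      # selected percentage of the moving average
        self.count_threshold = count_threshold            # batch counter limit (third trigger)
        self.write_durations = deque(maxlen=avg_window)   # recent data write durations (seconds)
        self._open_new_batch()

    def _open_new_batch(self):
        self.batch = []                                   # active metadata batch
        self.opened_at = time.monotonic()
        self.data_write_completed = False

    def add_metadata(self, entry):
        """Add journal metadata for incoming data; the data write itself is issued elsewhere."""
        self.batch.append(entry)
        self._check_triggers()

    def on_data_write_complete(self, duration_s):
        """Called when a data write for an entry in the active batch finishes (first trigger)."""
        self.write_durations.append(duration_s)
        self.data_write_completed = True
        self._check_triggers()

    def _check_triggers(self):
        avg = sum(self.write_durations) / len(self.write_durations) if self.write_durations else None
        open_time = time.monotonic() - self.opened_at
        trigger1 = self.data_write_completed                              # a data write is complete
        trigger2 = avg is not None and open_time > self.pct_of_avg * avg  # open time vs moving average
        trigger3 = len(self.batch) > self.count_threshold                 # batch counter threshold
        if self.batch and (trigger1 or trigger2 or trigger3):
            self._close_batch()

    def _close_batch(self):
        closed = self.batch
        # A single journal write would be issued here for all entries in `closed` (not shown).
        self._open_new_batch()
        return closed
```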
-
Publication Number: US10824435B2
Publication Date: 2020-11-03
Application Number: US16283854
Application Date: 2019-02-25
Applicant: VMWARE, INC.
Inventor: Pradeep Krishnamurthy , Srikanth Mahabalarao , Prasanna Aithal , Mahesh Hiregoudar
IPC: G06F9/46 , G06F9/38 , G06F9/50 , G06F9/455 , G06F16/182 , G06F11/30 , G06F16/188 , G06F17/11 , G06F3/06 , H04L12/911 , G06F21/62
Abstract: A method is provided for a host computer to allocate a resource from a clustered file system (CFS) volume stored on one or more physical storage devices to a file. The CFS volume includes resources organized into resource clusters, and the resource clusters make up regions. The method includes, for each region of resource clusters, determining a first count of resources allocated to the host computer and a second count of resources allocated to all other host computers, and calculating a region weight based on the first count and the second count. The method further includes sorting a list of the regions based on their region weights, selecting a region at or near the start of the list, and allocating the resource to the file from a resource cluster in the selected region.
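A short Python sketch of the region selection, assuming a simple weight formula (the host's own allocation count minus the other hosts' allocation count); the abstract only states that the weight is based on both counts, so the formula and all names below are assumptions for illustration.
```python
from typing import Dict

def choose_region(regions: Dict[str, Dict[str, int]], this_host: str) -> str:
    """Pick a region for the next allocation, as a sketch of the described weighting.

    `regions` maps region id -> {host id -> resources allocated from that region}.
    """
    def region_weight(region_id: str) -> int:
        counts = regions[region_id]
        first_count = counts.get(this_host, 0)                              # allocated to this host
        second_count = sum(n for h, n in counts.items() if h != this_host)  # allocated to other hosts
        return first_count - second_count                                   # assumed weight formula

    # Sort the regions by weight (highest first) and select the region at the start of the list.
    ordered = sorted(regions, key=region_weight, reverse=True)
    return ordered[0]

if __name__ == "__main__":
    regions = {
        "R0": {"hostA": 10, "hostB": 2},
        "R1": {"hostA": 1, "hostB": 20},
        "R2": {},
    }
    print(choose_region(regions, "hostA"))  # "R0": heavily used by hostA, lightly by others
```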
-
Publication Number: US10802741B2
Publication Date: 2020-10-13
Application Number: US16273179
Application Date: 2019-02-12
Applicant: VMWARE, INC.
Inventor: Pradeep Krishnamurthy
IPC: G06F3/06 , G06F12/02 , G06F16/17 , G06F16/176 , G06F9/455
Abstract: The disclosure provides an approach for zeroing the allocated storage blocks of a file. The blocks are zeroed in the background, during normal operation of the storage system, which lowers the chance that the zeroing process will increase the latency of a storage operation. The approach also avoids the delay in being able to use the file that would otherwise be caused by pre-zeroing the storage blocks before the file is used.
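A rough Python sketch of background zeroing under stated assumptions: allocation marks blocks as not yet zeroed, a background worker zeroes them during normal operation, and reads of not-yet-zeroed blocks return zeros so the file is usable immediately. The in-memory bookkeeping and the device interface (read_block/write_block) are illustrative, not the patented implementation.
```python
import queue
import threading

BLOCK_SIZE = 4096  # illustrative block size

class BackgroundZeroer:
    """Sketch of zeroing allocated file blocks in the background instead of pre-zeroing them."""

    def __init__(self, device):
        self.device = device                 # object with read_block(block_no) and write_block(block_no, data)
        self.pending = queue.Queue()         # blocks allocated but not yet zeroed
        self.unzeroed = set()
        self.lock = threading.Lock()
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def allocate_block(self, block_no):
        """Allocate a block for a file without zeroing it up front; the file is usable immediately."""
        with self.lock:
            self.unzeroed.add(block_no)
        self.pending.put(block_no)

    def read_block(self, block_no):
        """Reads of a not-yet-zeroed block return zeros, so stale data is never exposed."""
        with self.lock:
            if block_no in self.unzeroed:
                return bytes(BLOCK_SIZE)
        return self.device.read_block(block_no)

    def _run(self):
        # Background loop: zero pending blocks during normal operation of the storage system.
        while True:
            block_no = self.pending.get()
            with self.lock:
                still_unzeroed = block_no in self.unzeroed
            if still_unzeroed:
                self.device.write_block(block_no, bytes(BLOCK_SIZE))
                with self.lock:
                    self.unzeroed.discard(block_no)
```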
-
Publication Number: US11334249B2
Publication Date: 2022-05-17
Application Number: US17029851
Application Date: 2020-09-23
Applicant: VMware, Inc.
Inventor: Pradeep Krishnamurthy , Prasanna Aithal
IPC: G06F3/06
Abstract: The disclosure herein describes managing a rate of processing unmap requests for a data storage volume. Unmap requests are received from a cluster of active hosts that are associated with the data storage volume. Latency data values of each active host are then accessed. A long-term cluster latency average value is calculated based on the accessed latency data values of all active hosts over a long-term time period and a short-term cluster latency average value is calculated based on the accessed latency data values of all active hosts over a short-term time period. An unmap rate adjustment value is calculated based on a difference between the long-term cluster latency average value and the short-term cluster latency average value. The rate of processing unmap requests for the data storage volume is adjusted based on the unmap rate adjustment value and the unmap requests are performed based on the adjusted rate.
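A compact Python sketch of the described rate adjustment, computing long-term and short-term cluster latency averages over sliding windows and scaling the unmap processing rate by their difference; the window sizes, gain, and rate bounds are assumed values chosen for illustration.
```python
from collections import deque
from statistics import mean

class UnmapRateController:
    """Sketch of adjusting the unmap processing rate from cluster latency averages."""

    def __init__(self, initial_rate=1000, long_window=60, short_window=5,
                 gain=0.1, min_rate=100, max_rate=10000):
        self.rate = initial_rate                      # unmap requests processed per interval
        self.samples = deque(maxlen=long_window)      # per-interval cluster latency values
        self.short_window = short_window
        self.gain = gain
        self.min_rate, self.max_rate = min_rate, max_rate

    def record_interval(self, host_latencies):
        """host_latencies: latency data values reported by each active host for this interval."""
        self.samples.append(mean(host_latencies))
        if len(self.samples) >= self.short_window:
            long_avg = mean(self.samples)                               # long-term cluster average
            short_avg = mean(list(self.samples)[-self.short_window:])   # short-term cluster average
            # Rising short-term latency (short_avg > long_avg) slows unmaps; falling latency speeds them up.
            adjustment = self.gain * (long_avg - short_avg) / max(long_avg, 1e-9)
            self.rate = max(self.min_rate, min(self.max_rate, self.rate * (1 + adjustment)))
        return self.rate
```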
-
Publication Number: US11036694B2
Publication Date: 2021-06-15
Application Number: US15615848
Application Date: 2017-06-07
Applicant: VMWARE, INC.
Inventor: Asit Desai , Prasanna Aithal , Bryan Branstetter , Rohan Pasalkar , Prasad Rao Jangam , Mahesh S Hiregoudar , Pradeep Krishnamurthy , Srinivasa Shantharam
IPC: G06F16/00 , G06F16/188 , G06F16/11 , G06F16/13 , G06F9/455
Abstract: The systems described herein enhance the efficiency of memory usage and access in a VM file system data store by allocating memory in large and small file block clusters using affinity metadata, and by propagating and maintaining that affinity metadata in support of the described allocation. To maintain affinity metadata of the large file block cluster, affinity generation values stored on the large file block cluster are read, and cached affinity generation values for each small file block cluster are read from an in-memory cache associated with the large file block cluster. When the stored affinity generation values and the cached affinity generation values do not match, affinity metadata from all the small file block clusters associated with the large file block cluster is used to update the affinity metadata of the large file block cluster and the associated cache.
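A small Python sketch of the generation-value check, assuming each small file block cluster exposes a stored affinity generation value and its affinity metadata, and the large cluster keeps an in-memory cache of those generation values; the structures and names are illustrative rather than the on-disk VMFS format.
```python
class LargeFileBlockCluster:
    """Sketch of keeping a large cluster's affinity metadata in sync using generation values."""

    def __init__(self, small_clusters):
        self.small_clusters = small_clusters  # objects with .generation and .affinity_metadata
        self.affinity_metadata = {}           # aggregated affinity metadata of the large cluster
        # In-memory cache of each small cluster's affinity generation value.
        self.cached_generations = [sc.generation for sc in small_clusters]

    def refresh_affinity_metadata(self):
        """Re-aggregate only when a stored generation value differs from the cached one."""
        stored = [sc.generation for sc in self.small_clusters]  # generation values stored on disk
        if stored == self.cached_generations:
            return self.affinity_metadata                       # cache is current; no work needed
        # Mismatch: rebuild the large cluster's affinity metadata from all its small clusters
        # and refresh the associated cache.
        self.affinity_metadata = {}
        for sc in self.small_clusters:
            self.affinity_metadata.update(sc.affinity_metadata)
        self.cached_generations = stored
        return self.affinity_metadata
```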
-
Publication Number: US10649958B2
Publication Date: 2020-05-12
Application Number: US15615847
Application Date: 2017-06-07
Applicant: VMWARE, INC.
Inventor: Asit Desai , Prasanna Aithal , Bryan Branstetter , Rohan Pasalkar , Prasad Rao Jangam , Mahesh S Hiregoudar , Pradeep Krishnamurthy , Srinivasa Shantharam
Abstract: The systems described herein enhance the efficiency of memory usage and access in a VM file system data store by allocating memory in large and small file block clusters using affinity metadata, and by propagating and maintaining that affinity metadata in support of the described allocation. During storage of file data, an affinity identifier of the file data is determined. The affinity identifier is used to identify a large file block cluster and a small file block cluster within the identified large file block cluster. The file data is stored in the selected small file block cluster, and affinity metadata of the selected small file block cluster is updated to reflect the storage of the file data.
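A brief Python sketch of placement by affinity identifier, assuming the identifier hashes to a large file block cluster and that a small cluster within it is preferred when it already holds data with the same affinity; the hash choice and data structures are illustrative assumptions, not the patented layout.
```python
import zlib

def place_file_data(affinity_id: bytes, large_clusters):
    """Sketch of choosing a small file block cluster from an affinity identifier.

    `large_clusters` is a list; each large cluster is a list of small clusters, and each
    small cluster is a dict with 'free_blocks' and 'affinity_counts'.
    """
    # Identify a large file block cluster from the affinity identifier (CRC is an assumed hash).
    large = large_clusters[zlib.crc32(affinity_id) % len(large_clusters)]

    # Within it, prefer a small cluster that already holds data with the same affinity,
    # falling back to the small cluster with the most free blocks.
    candidates = [sc for sc in large if sc["free_blocks"] > 0]
    if not candidates:
        raise RuntimeError("no free space in the identified large file block cluster")
    small = max(
        candidates,
        key=lambda sc: (sc["affinity_counts"].get(affinity_id, 0), sc["free_blocks"]),
    )

    # Store the file data (not shown) and update the small cluster's affinity metadata.
    small["free_blocks"] -= 1
    small["affinity_counts"][affinity_id] = small["affinity_counts"].get(affinity_id, 0) + 1
    return small
```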
-
Publication Number: US10296454B2
Publication Date: 2019-05-21
Application Number: US15672339
Application Date: 2017-08-09
Applicant: VMWARE, INC.
Inventor: Prasad Rao Jangam , Asit Desai , Prasanna Aithal , Bryan Branstetter , Mahesh S Hiregoudar , Srinivasa Shantharam , Pradeep Krishnamurthy , Raghavan Pichai , Rohan Pasalkar
Abstract: The systems described herein enhance the efficiency of memory in a host file system with respect to hosted virtual file systems, in particular when the hosted virtual file systems use smaller file block sizes than the host file system. During storage of a file, a file block is assigned a block address and unmapping bits. The block address and unmapping bits are stored in a pointer block or other similar data structure associated with the file. In particular, the block address is stored in a first address block and the unmapping bits are stored in at least one additional address block located in proximity to the block address, such that the unmap granularity of the file is not limited by the fixed size of address blocks in the system.
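A minimal Python sketch of the pointer-block layout described above, assuming each file block consumes two adjacent address slots, one for the block address and one for a bitmap of unmapping bits covering its sub-blocks; the slot width and sub-block count are illustrative assumptions.
```python
# Sketch of a pointer block in which each file block occupies two adjacent address slots:
# [block address, unmap bitmap, block address, unmap bitmap, ...]

SUBBLOCKS_PER_FILE_BLOCK = 16  # e.g. a host file block subdivided into 16 guest-sized sub-blocks

class PointerBlock:
    def __init__(self, num_entries):
        self.slots = [0] * (2 * num_entries)  # two slots per file block

    def set_entry(self, index, block_address):
        """Assign a block address to entry `index`; no sub-blocks are unmapped yet."""
        self.slots[2 * index] = block_address
        self.slots[2 * index + 1] = 0

    def unmap_subblock(self, index, subblock):
        """Record an unmap at sub-block granularity without touching the block address."""
        assert 0 <= subblock < SUBBLOCKS_PER_FILE_BLOCK
        self.slots[2 * index + 1] |= (1 << subblock)

    def is_subblock_mapped(self, index, subblock):
        return not (self.slots[2 * index + 1] >> subblock) & 1
```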