-
Publication Number: US20210216508A1
Publication Date: 2021-07-15
Application Number: US16808417
Filing Date: 2020-03-04
Applicant: VMWARE, INC.
Inventor: Prasanth Jose , Pradeep Krishnamurthy , Gurudutt Kumar Vyudayagiri Jagannath , Vivek Patidar
Abstract: The disclosure provides for fault-tolerant parallel journaling that speeds up both input/output (I/O) operations and recovery operations. Journal entry writing may occur in parallel with data writing operations. Even if a crash occurs during a data writing operation for which the journal entry has been written, the recovery operation will correctly determine that the journal entry is not valid. Additionally, recovery operations may need to validate fewer journal entries, and yet possibly retain more valid data. Examples include: for each of a plurality of journal entries: receiving incoming data; determining a signature for the incoming data; generating the journal entry for the incoming data; writing the signature in the journal entry; and writing the journal entry and the incoming data to a storage medium; and based at least on writing data to the storage medium, updating an awaiting index in a journal header.
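A minimal sketch of the write path this abstract describes, assuming a SHA-256 signature, a fixed-size journal-entry layout, and a small thread pool for the parallel journal/data writes; the class name, the entry layout, and the header offset of the awaiting index are illustrative assumptions, not taken from the patent.

```python
# Illustrative sketch of the parallel write path (not VMware's implementation).
import hashlib
import os
import struct
from concurrent.futures import ThreadPoolExecutor

HEADER_SIZE = 8          # assumed: header stores the awaiting index as a u64
ENTRY_SIZE = 8 + 4 + 32  # assumed layout: offset (u64) + length (u32) + SHA-256


class ParallelJournal:
    def __init__(self, journal_path, data_path):
        # Create the backing files if missing so the sketch runs standalone.
        for path in (journal_path, data_path):
            if not os.path.exists(path):
                open(path, "wb").close()
        self.journal = open(journal_path, "r+b")
        self.data = open(data_path, "r+b")
        self.pool = ThreadPoolExecutor(max_workers=2)

    def append(self, entry_index, incoming, data_offset):
        # 1. Determine a signature for the incoming data.
        signature = hashlib.sha256(incoming).digest()
        # 2. Generate the journal entry and write the signature into it.
        entry = struct.pack("<QI", data_offset, len(incoming)) + signature
        # 3. Write the journal entry and the incoming data in parallel.
        futures = [
            self.pool.submit(self._write_at, self.journal,
                             HEADER_SIZE + entry_index * ENTRY_SIZE, entry),
            self.pool.submit(self._write_at, self.data, data_offset, incoming),
        ]
        for f in futures:
            f.result()
        # 4. Only after the data write has completed, advance the awaiting
        #    index in the journal header; recovery validates entries from
        #    that index onward.
        self._write_at(self.journal, 0, struct.pack("<Q", entry_index + 1))

    @staticmethod
    def _write_at(fh, offset, payload):
        fh.seek(offset)
        fh.write(payload)
        fh.flush()
        os.fsync(fh.fileno())


journal = ParallelJournal("journal.bin", "data.bin")
journal.append(entry_index=0, incoming=b"hello world", data_offset=0)
```

If a crash lands between steps 3 and 4, the header still points at the old awaiting index, so recovery re-validates that entry against its stored signature rather than trusting it.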
-
Publication Number: US20220019541A1
Publication Date: 2022-01-20
Application Number: US17009773
Filing Date: 2020-09-01
Applicant: VMWARE, INC.
Inventor: Ankit Dubey , Gurudutt Kumar Vyudayagiri Jagannath , Siddhant Gupta
IPC: G06F12/0875 , G06N20/00 , G06F16/28 , G06F13/16
Abstract: Techniques are disclosed for dynamically managing a cache. Certain techniques include clustering I/O requests into a plurality of clusters by a machine-learning clustering algorithm that collects the I/O requests into clusters of similar I/O requests based on properties of the I/O requests. Further, certain techniques include identifying, for a received I/O request, a cluster stored in the cache. Certain techniques further include loading a set of blocks of the identified cluster into the cache.
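A minimal sketch of the clustering step, assuming k-means as the machine-learning clustering algorithm and (offset, size, is_write) as the I/O request properties; the abstract specifies neither the algorithm nor the feature set, so both are assumptions.

```python
# Illustrative sketch: cluster I/O requests by their properties with k-means.
import random


def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))


def kmeans(points, k, iterations=20):
    """Group feature vectors into k clusters; returns per-point labels and centroids."""
    centroids = random.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iterations):
        # Assignment step: attach each request to its nearest centroid.
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: dist2(p, centroids[c]))
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, lbl in zip(points, labels) if lbl == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels, centroids


# Each I/O request is reduced to a feature vector of its properties.
requests = [
    {"offset": 4096, "size": 4096, "is_write": 0},
    {"offset": 8192, "size": 4096, "is_write": 0},
    {"offset": 1_048_576, "size": 65536, "is_write": 1},
    {"offset": 1_114_112, "size": 65536, "is_write": 1},
]
features = [[r["offset"], r["size"], r["is_write"]] for r in requests]
labels, centroids = kmeans(features, k=2)
print(labels)  # similar requests (small sequential reads vs. large writes) share a label
```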
-
Publication Number: US11599269B2
Publication Date: 2023-03-07
Application Number: US17324179
Filing Date: 2021-05-19
Applicant: VMWARE, INC.
Inventor: Prasanth Jose , Gurudutt Kumar Vyudayagiri Jagannath
Abstract: Reducing file write latency includes receiving incoming data, from a data source, for storage in a file and a target storage location for the incoming data, and determining whether the target storage location corresponds to a cache entry. Based on at least the target storage location not corresponding to a cache entry, the incoming data is written to a block pre-allocated for cache misses and the writing of the incoming data to the pre-allocated block is journaled. The writing of the incoming data is acknowledged to the data source. A process executing in parallel with the above commits the incoming data in the pre-allocated block with the file. Using this parallel process to commit the incoming data in the file removes high-latency operations (e.g., reading pointer blocks from the storage media) from a critical input/output path and results in more rapid write acknowledgement.
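A minimal sketch of the cache-miss fast path described above, assuming a queue-fed background committer and in-memory stand-ins for the journal and storage; names such as PREALLOCATED_BLOCKS and commit_worker are illustrative, not from the patent.

```python
# Illustrative sketch: acknowledge writes quickly, commit into the file later.
import queue
import threading

cache = {}                                  # target location -> cached block
PREALLOCATED_BLOCKS = [1000, 1001, 1002]    # blocks reserved for cache misses
journal = []                                # stand-in for an on-disk journal
commit_queue = queue.Queue()


def write(target_location, incoming):
    if target_location in cache:
        cache[target_location] = incoming   # cache hit: update in place
        return "ack"
    # Cache miss: write to a pre-allocated block and journal that write,
    # then acknowledge without touching pointer blocks on the critical path.
    block = PREALLOCATED_BLOCKS.pop(0)
    journal.append({"block": block, "target": target_location, "len": len(incoming)})
    commit_queue.put((block, target_location, incoming))
    return "ack"                            # acknowledged to the data source


def commit_worker(file_blocks):
    # Runs in parallel with the write path: commits data from the
    # pre-allocated block into the file (the high-latency pointer-block
    # work happens here, off the critical I/O path).
    while True:
        block, target_location, incoming = commit_queue.get()
        file_blocks[target_location] = incoming
        PREALLOCATED_BLOCKS.append(block)   # recycle the pre-allocated block
        commit_queue.task_done()


file_blocks = {}
threading.Thread(target=commit_worker, args=(file_blocks,), daemon=True).start()
print(write(42, b"hello"))   # acknowledged before the commit completes
commit_queue.join()          # wait for the background commit in this demo only
```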
-
Publication Number: US11487670B2
Publication Date: 2022-11-01
Application Number: US17009773
Filing Date: 2020-09-01
Applicant: VMWARE, INC.
Inventor: Ankit Dubey , Gurudutt Kumar Vyudayagiri Jagannath , Siddhant Gupta
IPC: G06F12/0875 , G06N20/00 , G06F13/16 , G06F16/28
Abstract: Techniques are disclosed for dynamically managing a cache. Certain techniques include clustering I/O requests into a plurality of clusters by a machine-learning clustering algorithm that collects the I/O requests into clusters of similar I/O requests based on properties of the I/O requests. Further, certain techniques include identifying, for a received I/O request, a cluster stored in the cache. Certain techniques further include loading a set of blocks of the identified cluster into the cache.
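Complementing the clustering sketch under the related publication above, an assumed sketch of the lookup step: a received I/O request is mapped to its nearest cluster, and that cluster's blocks are loaded into the cache. The centroids, per-cluster block lists, read_block, and BLOCK_SIZE are hypothetical stand-ins.

```python
# Illustrative sketch: identify the cluster for a request and pre-load its blocks.
BLOCK_SIZE = 4096
cache = {}                                   # block number -> block contents

# Assumed output of an earlier clustering pass: per-cluster centroids over
# (offset, size, is_write) features and the blocks each cluster touched.
centroids = {0: [8192, 4096, 0], 1: [1_081_344, 65536, 1]}
cluster_blocks = {0: [1, 2, 3], 1: [256, 257]}


def read_block(block_no):
    return b"\0" * BLOCK_SIZE                # stand-in for a real storage read


def identify_cluster(request):
    feature = [request["offset"], request["size"], request["is_write"]]
    return min(
        centroids,
        key=lambda c: sum((x - y) ** 2 for x, y in zip(feature, centroids[c])),
    )


def handle_io(request):
    cluster = identify_cluster(request)
    # Load the identified cluster's set of blocks into the cache ahead of use.
    for block_no in cluster_blocks[cluster]:
        if block_no not in cache:
            cache[block_no] = read_block(block_no)


handle_io({"offset": 4096, "size": 4096, "is_write": 0})
print(sorted(cache))                         # blocks of cluster 0 are now cached
```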
-
Publication Number: US20220300163A1
Publication Date: 2022-09-22
Application Number: US17324179
Filing Date: 2021-05-19
Applicant: VMWARE, INC.
Inventor: Prasanth Jose , Gurudutt Kumar Vyudayagiri Jagannath
IPC: G06F3/06
Abstract: Reducing file write latency includes receiving incoming data, from a data source, for storage in a file and a target storage location for the incoming data, and determining whether the target storage location corresponds to a cache entry. Based on at least the target storage location not corresponding to a cache entry, the incoming data is written to a block pre-allocated for cache misses and the writing of the incoming data to the pre-allocated block is journaled. The writing of the incoming data is acknowledged to the data source. A process executing in parallel with the above commits the incoming data in the pre-allocated block with the file. Using this parallel process to commit the incoming data in the file removes high-latency operations (e.g., reading pointer blocks from the storage media) from a critical input/output path and results in more rapid write acknowledgement.
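Because the write to the pre-allocated block is journaled, a recovery pass can re-apply any write whose commit into the file had not finished before a crash. The sketch below assumes an in-memory journal record with illustrative fields (block, target, len, committed); none of these names come from the patent.

```python
# Illustrative sketch: replay journaled, uncommitted writes after a crash.
def recover(journal_records, file_blocks, read_preallocated_block):
    """Re-commit journaled writes that never reached the file."""
    for rec in journal_records:
        if rec.get("committed"):
            continue                               # already merged into the file
        data = read_preallocated_block(rec["block"], rec["len"])
        file_blocks[rec["target"]] = data          # redo the interrupted commit
        rec["committed"] = True


# Example with in-memory stand-ins for the journal and storage.
journal_records = [
    {"block": 1000, "target": 42, "len": 5, "committed": False},
    {"block": 1001, "target": 7, "len": 3, "committed": True},
]
staged = {1000: b"hello", 1001: b"abc"}
file_blocks = {}
recover(journal_records, file_blocks, lambda b, n: staged[b][:n])
print(file_blocks)   # {42: b'hello'}: only the uncommitted write is replayed
```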
-
Publication Number: US11436200B2
Publication Date: 2022-09-06
Application Number: US16808417
Filing Date: 2020-03-04
Applicant: VMWARE, INC.
Inventor: Prasanth Jose , Pradeep Krishnamurthy , Gurudutt Kumar Vyudayagiri Jagannath , Vivek Patidar
Abstract: The disclosure provides for fault-tolerant parallel journaling that speeds up both input/output (I/O) operations and recovery operations. Journal entry writing may occur in parallel with data writing operations. Even if a crash occurs during a data writing operation for which the journal entry has been written, the recovery operation will correctly determine that the journal entry is not valid. Additionally, recovery operations may need to validate fewer journal entries, and yet possibly retain more valid data. Examples include: for each of a plurality of journal entries: receiving incoming data; determining a signature for the incoming data; generating the journal entry for the incoming data; writing the signature in the journal entry; and writing the journal entry and the incoming data to a storage medium; and based at least on writing data to the storage medium, updating an awaiting index in a journal header.
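Complementing the write-path sketch under the related publication above, an assumed recovery pass: only entries at or beyond the header's awaiting index are validated, and an entry is accepted only if the recomputed data signature matches the one stored in it. The entry layout mirrors that earlier sketch and is not taken from the patent.

```python
# Illustrative sketch of recovery for the parallel journal above.
import hashlib
import struct

HEADER_SIZE = 8              # assumed: header stores the awaiting index as a u64
ENTRY_SIZE = 8 + 4 + 32      # assumed: offset (u64) + length (u32) + SHA-256 digest


def recover(journal_path, data_path, entry_count):
    """Return the indices of journal entries whose data is valid on storage."""
    with open(journal_path, "rb") as j, open(data_path, "rb") as d:
        awaiting = struct.unpack("<Q", j.read(HEADER_SIZE))[0]
        valid = []
        # Entries before the awaiting index were fully persisted; only the
        # remaining entries need validation, so fewer entries are checked.
        for i in range(awaiting, entry_count):
            j.seek(HEADER_SIZE + i * ENTRY_SIZE)
            raw = j.read(ENTRY_SIZE)
            if len(raw) < ENTRY_SIZE:
                break
            offset, length = struct.unpack("<QI", raw[:12])
            stored_sig = raw[12:44]
            d.seek(offset)
            data = d.read(length)
            # If the crash hit before the data write finished, the recomputed
            # signature will not match and the entry is correctly rejected.
            if hashlib.sha256(data).digest() == stored_sig:
                valid.append(i)
        return valid


print(recover("journal.bin", "data.bin", entry_count=1))
```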