-
Publication Number: US20180275916A1
Publication Date: 2018-09-27
Application Number: US15465497
Application Date: 2017-03-21
Applicant: VMware, Inc.
Inventor: Adrian Marinescu
IPC: G06F3/06
CPC classification number: G06F3/0659 , G06F3/0604 , G06F3/064 , G06F3/0656 , G06F3/0685
Abstract: The present disclosure describes processing a write command directed to a block-based main storage device and having a target logical sector and write data. The processing may include writing an address of a physical sector in the main storage device that contains the target logical sector to a header portion of a scratch block stored in a byte-addressable storage. The write data may be written to a slot in the scratch block. The scratch block may be committed to a scratch block in persistent storage. Subsequent to processing the write command, a write completion response may be signaled to the sender of the write command to indicate completion of the write command, without having committed the write data to the main storage device. Write data from several write commands may be subsequently committed to the main storage device.
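To make the mechanism concrete, below is a minimal C sketch of the write path the abstract describes: the physical-sector address goes into the scratch block's header, the write data into a slot, and completion is acknowledged before anything reaches the main storage device. All names and sizes (scratch_block, handle_write, SLOTS_PER_BLOCK) are illustrative assumptions, not the patent's actual implementation.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SECTOR_SIZE 512
#define SLOTS_PER_BLOCK 8

struct scratch_block {
    /* Header: physical-sector address for each occupied slot. */
    uint64_t phys_sector[SLOTS_PER_BLOCK];
    int      used;                               /* slots filled so far */
    uint8_t  slot[SLOTS_PER_BLOCK][SECTOR_SIZE]; /* buffered write data */
};

static struct scratch_block sb; /* lives in byte-addressable storage */

/* Map a logical sector to its physical sector (identity map here). */
static uint64_t physical_sector_of(uint64_t logical) { return logical; }

/* Flush the buffered writes to the block-based main storage device. */
static void commit_scratch_block(struct scratch_block *b) {
    for (int i = 0; i < b->used; i++)
        printf("commit slot %d -> physical sector %llu\n",
               i, (unsigned long long)b->phys_sector[i]);
    b->used = 0;
}

/* Process one write command and signal completion without touching
 * the main storage device. */
static void handle_write(uint64_t logical_sector, const uint8_t *data) {
    if (sb.used == SLOTS_PER_BLOCK)
        commit_scratch_block(&sb); /* scratch block full: drain it */
    sb.phys_sector[sb.used] = physical_sector_of(logical_sector);
    memcpy(sb.slot[sb.used], data, SECTOR_SIZE);
    sb.used++;
    printf("write to sector %llu acknowledged\n",
           (unsigned long long)logical_sector);
}

int main(void) {
    uint8_t payload[SECTOR_SIZE] = {0};
    for (uint64_t s = 0; s < 10; s++)
        handle_write(s, payload);
    commit_scratch_block(&sb); /* final drain of buffered writes */
    return 0;
}
```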
-
Publication Number: US20170351437A1
Publication Date: 2017-12-07
Application Number: US15174631
Application Date: 2016-06-06
Applicant: VMware, Inc.
Inventor: Adrian Marinescu , Thorbjoern Donbaek
IPC: G06F3/06
CPC classification number: G06F3/0611 , G06F3/0613 , G06F3/0635 , G06F3/067 , G06F3/0685 , G06F9/45533 , G06F9/45545
Abstract: The current document is directed to a storage stack subsystem of a computer system that transfers data between memory and various data-storage devices and subsystems and that processes I/O requests at a greater rate than conventional storage stacks. In one implementation, the disclosed storage stack includes a latency monitor, an I/O-scheduling bypass pathway, and a short-circuit switch controlled by the latency monitor. While the latency associated with I/O-request execution remains below a threshold latency, the I/O-scheduling components of the storage stack are bypassed, with I/O requests routed directly to multiple input queues associated with one or more high-throughput multi-queue I/O device controllers. When the latency for execution of I/O requests rises above the threshold latency, I/O requests are instead directed to the I/O-scheduling components of the storage stack, which attempt to optimally reorganize the incoming I/O-request stream and optimally distribute I/O requests among the multiple input queues associated with the I/O device controllers.
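Below is a minimal sketch of the short-circuit switch the abstract describes: requests go straight to the device controller's input queues while measured latency stays under a threshold, and through the I/O scheduler otherwise. The names, the threshold value, and the single-queue simplification are assumptions.

```c
#include <stdio.h>

#define LATENCY_THRESHOLD_US 200.0

struct io_request { int id; };

static double measured_latency_us = 0.0; /* updated by the latency monitor */

/* Bypass pathway: hand the request directly to the controller's queues. */
static void enqueue_direct(struct io_request *r) {
    printf("request %d -> multi-queue controller (bypass path)\n", r->id);
}

/* Scheduling pathway: let the I/O scheduler reorder and distribute. */
static void enqueue_scheduler(struct io_request *r) {
    printf("request %d -> I/O scheduler (reorder/distribute)\n", r->id);
}

/* The short-circuit switch: route based on observed latency. */
static void submit(struct io_request *r) {
    if (measured_latency_us < LATENCY_THRESHOLD_US)
        enqueue_direct(r);     /* fast path while the device keeps up */
    else
        enqueue_scheduler(r);  /* fall back to scheduling under load */
}

int main(void) {
    struct io_request a = {1}, b = {2};
    submit(&a);
    measured_latency_us = 500.0; /* monitor reports rising latency */
    submit(&b);
    return 0;
}
```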
-
Publication Number: US10152278B2
Publication Date: 2018-12-11
Application Number: US15465497
Application Date: 2017-03-21
Applicant: VMware, Inc.
Inventor: Adrian Marinescu
IPC: G06F3/06
Abstract: The present disclosure describes processing a write command directed to a block-based main storage device and having a target logical sector and write data. The processing may include writing an address of a physical sector in the main storage device that contains the target logical sector to a header portion of a scratch block stored in a byte-addressable storage. The write data may be written to a slot in the scratch block. The scratch block may be committed to a scratch block in persistent storage. Subsequent to processing the write command, a write completion response may be signaled to the sender of the write command to indicate completion of the write command, without having committed the write data to the main storage device. Write data from several write commands may be subsequently committed to the main storage device.
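The earlier publication of this application above already carries a sketch of the buffered write path, so here is a complementary sketch of crash recovery: because the abstract notes the scratch block is committed to persistent storage before completion is signaled, buffered writes can be replayed to the main device after a failure. The replay routine itself is an assumption, not something the abstract spells out.

```c
#include <stdint.h>
#include <stdio.h>

#define SECTOR_SIZE 512
#define SLOTS_PER_BLOCK 8

struct scratch_block {
    uint64_t phys_sector[SLOTS_PER_BLOCK]; /* header: target addresses */
    int      used;
    uint8_t  slot[SLOTS_PER_BLOCK][SECTOR_SIZE];
};

/* Stand-in for a block write to the main storage device. */
static void device_write(uint64_t phys, const uint8_t *data) {
    (void)data;
    printf("replay: write physical sector %llu\n", (unsigned long long)phys);
}

/* Reapply every buffered write recorded in a persisted scratch block. */
static void replay_scratch_block(const struct scratch_block *b) {
    for (int i = 0; i < b->used; i++)
        device_write(b->phys_sector[i], b->slot[i]);
}

int main(void) {
    struct scratch_block b = { .used = 2 }; /* as if loaded from persistent storage */
    b.phys_sector[0] = 7;
    b.phys_sector[1] = 42;
    replay_scratch_block(&b);
    return 0;
}
```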
-
Publication Number: US20170351441A1
Publication Date: 2017-12-07
Application Number: US15174376
Application Date: 2016-06-06
Applicant: VMware, Inc.
Inventor: Adrian Marinescu
CPC classification number: G06F3/0619 , G06F3/0631 , G06F3/065 , G06F3/067 , G06F9/5027
Abstract: The current document is directed to an efficient and non-blocking mechanism for flow control within a multi-processor system or multi-core processor with hierarchical memory caches. Traditionally, a centralized shared-computational-resource access pool, accessed using a locking operation, is used to control access to a shared computational resource within a multi-processor system or multi-core processor. The efficient and non-blocking mechanism for flow control, to which the current document is directed, instead distributes local shared-computational-resource access pools to each core of a multi-core processor and/or to each processor of a multi-processor system, avoiding the significant computational overheads associated with cache-controller contention control for a traditional, centralized access pool and with the use of locking operations for access to that pool.
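Below is a minimal sketch of the distributed access pools the abstract describes: each core draws from its own cache-line-aligned pool of credits and touches the shared global pool only to refill in batches, so the common case involves no lock and no cross-core cache-line contention. The credit counts, batch size, and use of C11 atomics are illustrative assumptions.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NCORES 4
#define REFILL_BATCH 8

/* Global pool, touched only on refill; in a traditional centralized
 * design every acquisition would contend on this one location. */
static atomic_int global_credits = 1024;

/* One pool per core, padded to its own cache line so cores do not
 * contend for the same line. Each pool is assumed to be accessed only
 * by code pinned to its core. */
struct core_pool { _Alignas(64) int credits; };
static struct core_pool pool[NCORES];

/* Acquire one unit of the shared resource from the caller's core-local
 * pool; fall back to the global pool only when the local pool is dry. */
static bool acquire(int core) {
    if (pool[core].credits > 0) { pool[core].credits--; return true; }
    int prev = atomic_fetch_sub(&global_credits, REFILL_BATCH);
    if (prev < REFILL_BATCH) {                 /* global pool exhausted: */
        atomic_fetch_add(&global_credits, REFILL_BATCH); /* undo and fail */
        return false;
    }
    pool[core].credits = REFILL_BATCH - 1;     /* one credit consumed now */
    return true;
}

int main(void) {
    for (int i = 0; i < 10; i++)
        printf("core 0 acquire: %s\n", acquire(0) ? "ok" : "blocked");
    return 0;
}
```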
-
Publication Number: US11301142B2
Publication Date: 2022-04-12
Application Number: US15174376
Application Date: 2016-06-06
Applicant: VMware, Inc.
Inventor: Adrian Marinescu
Abstract: The current document is directed to an efficient and non-blocking mechanism for flow control within a multi-processor system or multi-core processor with hierarchical memory caches. Traditionally, a centralized shared-computational-resource access pool, accessed using a locking operation, is used to control access to a shared computational resource within a multi-processor system or multi-core processor. The efficient and non-blocking mechanism for flow control, to which the current document is directed, instead distributes local shared-computational-resource access pools to each core of a multi-core processor and/or to each processor of a multi-processor system, avoiding the significant computational overheads associated with cache-controller contention control for a traditional, centralized access pool and with the use of locking operations for access to that pool.
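The published application above carries a sketch of the acquisition path; as a complement, this sketch shows a plausible release path, returning credits to the core-local pool and spilling excess back to the global pool so credits do not strand on an idle core. The spill policy is an assumption, not the patent's stated method.

```c
#include <stdatomic.h>
#include <stdio.h>

#define NCORES 4
#define LOCAL_CAP 16 /* spill back to the global pool beyond this */

static atomic_int global_credits = 1024;
struct core_pool { _Alignas(64) int credits; };
static struct core_pool pool[NCORES];

/* Return one unit to the caller's core-local pool; if the local pool
 * grows past its cap, spill half back to the global pool so other
 * cores can refill from it. */
static void release(int core) {
    pool[core].credits++;
    if (pool[core].credits > LOCAL_CAP) {
        int spill = pool[core].credits / 2;
        pool[core].credits -= spill;
        atomic_fetch_add(&global_credits, spill);
    }
}

int main(void) {
    for (int i = 0; i < 20; i++)
        release(0);
    printf("core 0 local: %d, global: %d\n",
           pool[0].credits, atomic_load(&global_credits));
    return 0;
}
```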
-
Publication Number: US11144227B2
Publication Date: 2021-10-12
Application Number: US15698636
Application Date: 2017-09-07
Applicant: VMware, Inc.
Inventor: Adrian Marinescu , Glen McCready
IPC: G06F3/06 , G06F16/174 , G06F11/14
Abstract: Techniques for implementing content-based post-process data deduplication are provided. In one set of embodiments, a computer system can receive a write request comprising write data to be persisted to a storage system and can sample a portion of the write data. The computer system can further execute one or more analyses on the sampled portion in order to determine whether the write data is a good deduplication candidate that is likely to contain redundancies which can be eliminated via data deduplication. If the one or more analyses indicate that the write data is a good deduplication candidate, the computer system can cause the write data to be persisted to a staging storage component of the storage system. Otherwise, the computer system can cause the write data to be persisted to a primary storage component of the storage system that is separate from the staging storage component.
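Below is a minimal sketch of the candidate-selection step the abstract describes: sample a prefix of the write data, run a cheap analysis (here a byte-entropy estimate, one plausible stand-in for the patent's unspecified analyses), and route low-entropy data to staging for later deduplication while high-entropy data goes straight to primary storage. The threshold, sample size, and names are assumptions.

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SAMPLE_BYTES 256
#define ENTROPY_THRESHOLD 6.0 /* bits per byte; higher looks incompressible */

/* Shannon entropy of a sampled prefix of the write data, in bits/byte. */
static double sample_entropy(const uint8_t *data, size_t len) {
    size_t n = len < SAMPLE_BYTES ? len : SAMPLE_BYTES;
    unsigned count[256] = {0};
    for (size_t i = 0; i < n; i++)
        count[data[i]]++;
    double h = 0.0;
    for (int b = 0; b < 256; b++) {
        if (count[b] == 0) continue;
        double p = (double)count[b] / (double)n;
        h -= p * log2(p);
    }
    return h;
}

/* Route the write: low-entropy data is a promising dedup candidate and
 * goes to staging; high-entropy data goes straight to primary storage. */
static void persist_write(const uint8_t *data, size_t len) {
    if (sample_entropy(data, len) < ENTROPY_THRESHOLD)
        printf("entropy low  -> staging storage (dedup candidate)\n");
    else
        printf("entropy high -> primary storage\n");
}

int main(void) {
    uint8_t repetitive[1024], noisy[1024];
    memset(repetitive, 'A', sizeof repetitive);
    for (size_t i = 0; i < sizeof noisy; i++)
        noisy[i] = (uint8_t)rand(); /* stand-in for compressed/encrypted data */
    persist_write(repetitive, sizeof repetitive); /* -> staging */
    persist_write(noisy, sizeof noisy);           /* -> primary */
    return 0;
}
```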
-
Publication Number: US20190073151A1
Publication Date: 2019-03-07
Application Number: US15698636
Application Date: 2017-09-07
Applicant: VMware, Inc.
Inventor: Adrian Marinescu , Glen McCready
Abstract: Techniques for implementing content-based post-process data deduplication are provided. In one set of embodiments, a computer system can receive a write request comprising write data to be persisted to a storage system and can sample a portion of the write data. The computer system can further execute one or more analyses on the sampled portion in order to determine whether the write data is a good deduplication candidate that is likely to contain redundancies which can be eliminated via data deduplication. If the one or more analyses indicate that the write data is a good deduplication candidate, the computer system can cause the write data to be persisted to a staging storage component of the storage system. Otherwise, the computer system can cause the write data to be persisted to a primary storage component of the storage system that is separate from the staging storage component.
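The granted patent above carries a sketch of candidate selection; as a complement, this sketch shows what the post-process pass over the staging area might look like: hash fixed-size chunks and replace repeats with references. The chunk size, FNV-1a hash, and flat index are illustrative assumptions, not the patent's stated layout.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CHUNK 64

/* FNV-1a, a simple stand-in for whatever fingerprint the dedup engine uses. */
static uint64_t fnv1a(const uint8_t *p, size_t n) {
    uint64_t h = 1469598103934665603ull;
    for (size_t i = 0; i < n; i++) { h ^= p[i]; h *= 1099511628211ull; }
    return h;
}

int main(void) {
    /* Staged data containing two identical chunks ('A', 'B', 'A'). */
    uint8_t staged[3 * CHUNK];
    memset(staged, 'A', CHUNK);
    memset(staged + CHUNK, 'B', CHUNK);
    memset(staged + 2 * CHUNK, 'A', CHUNK);

    uint64_t seen[8]; /* tiny fingerprint index, illustration only */
    int nseen = 0;
    for (size_t off = 0; off < sizeof staged; off += CHUNK) {
        uint64_t h = fnv1a(staged + off, CHUNK);
        int dup = 0;
        for (int i = 0; i < nseen; i++)
            if (seen[i] == h) dup = 1;
        if (dup)
            printf("chunk @%zu: duplicate, store reference\n", off);
        else {
            seen[nseen++] = h;
            printf("chunk @%zu: unique, store data\n", off);
        }
    }
    return 0;
}
```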
-
Publication Number: US10108349B2
Publication Date: 2018-10-23
Application Number: US15174631
Application Date: 2016-06-06
Applicant: VMware, Inc.
Inventor: Adrian Marinescu , Thorbjoern Donbaek
Abstract: The current document is directed to a storage stack subsystem of a computer system that transfers data between memory and various data-storage devices and subsystems and that processes I/O requests. In one implementation, the disclosed storage stack includes a latency monitor, an I/O-scheduling bypass pathway, and a short-circuit switch controlled by the latency monitor. While the latency associated with I/O-request execution remains below a threshold latency, the I/O-scheduling components of the storage stack are bypassed, with I/O requests routed directly to multiple input queues associated with one or more high-throughput multi-queue I/O device controllers. When the latency for execution of I/O requests rises above the threshold latency, I/O requests are instead directed to the I/O-scheduling components of the storage stack, which attempt to optimally reorganize the incoming I/O-request stream and optimally distribute I/O requests among the multiple input queues associated with the I/O device controllers.
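The published application earlier in this list carries a sketch of the routing switch; as a complement, this sketch shows one way the latency monitor could derive the latency it compares against the threshold, using an exponentially weighted moving average of completion times. The EWMA and its parameters are assumptions, not the patent's stated method.

```c
#include <stdbool.h>
#include <stdio.h>

#define THRESHOLD_US 200.0
#define ALPHA 0.2 /* weight given to the newest latency sample */

static double ewma_us = 0.0;

/* Fold one completed request's latency into the running estimate. */
static void record_completion(double latency_us) {
    ewma_us = ALPHA * latency_us + (1.0 - ALPHA) * ewma_us;
}

/* The short-circuit switch consults the smoothed estimate. */
static bool use_bypass(void) { return ewma_us < THRESHOLD_US; }

int main(void) {
    double samples[] = {50, 60, 55, 800, 900, 950};
    for (int i = 0; i < 6; i++) {
        record_completion(samples[i]);
        printf("ewma=%.1f us -> %s\n", ewma_us,
               use_bypass() ? "bypass" : "scheduler");
    }
    return 0;
}
```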