Logical to Physical Sector Size Adapter
    Invention Application

    Publication Number: US20180275916A1

    Publication Date: 2018-09-27

    Application Number: US15465497

    Application Date: 2017-03-21

    Applicant: VMware, Inc.

    Inventor: Adrian Marinescu

    Abstract: The present disclosure describes processing a write command directed to a block-based main storage device, the command having a target logical sector and write data. The processing may include writing the address of the physical sector in the main storage device that contains the target logical sector to a header portion of a scratch block stored in byte-addressable storage. The write data may be written to a slot in the scratch block. The scratch block may be committed as a scratch block in persistent storage. Subsequent to processing the write command, a write completion response may be signaled to the sender of the write command to indicate to the sender completion of the write command, without having committed the write data to the main storage device. Write data from several write commands may be subsequently committed to the main storage device.
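The abstract describes a write path in which small logical-sector writes are absorbed into a scratch block whose header records the containing physical sector, with completion acknowledged before any data reaches the main device. The following is a minimal illustrative sketch of that flow, not VMware's implementation; all names (`ScratchBlock`, `SectorAdapter`, the slot count, and the sector sizes) are assumptions introduced for illustration.

```python
# Illustrative sketch of a logical-to-physical sector size adapter
# (hypothetical names and constants; not the patented implementation).

LOGICAL_SECTOR = 512      # size the sender writes in
PHYSICAL_SECTOR = 4096    # size the main storage device uses
SLOTS_PER_BLOCK = 4       # scratch-block capacity before a flush

class ScratchBlock:
    """Header records, per slot, the physical-sector address (and offset)
    that contains the target logical sector; slots hold the write data."""
    def __init__(self):
        self.header = []   # list of (physical_addr, byte_offset)
        self.slots = []    # list of LOGICAL_SECTOR-sized payloads

class SectorAdapter:
    def __init__(self, main_store):
        self.main_store = main_store   # dict: physical_addr -> bytearray
        self.scratch = ScratchBlock()
        self.completed = []            # completions signaled to senders

    def write(self, logical_sector, data):
        byte_addr = logical_sector * LOGICAL_SECTOR
        phys_addr = byte_addr // PHYSICAL_SECTOR
        offset = byte_addr % PHYSICAL_SECTOR
        self.scratch.header.append((phys_addr, offset))
        self.scratch.slots.append(data)
        # Signal completion WITHOUT committing to the main storage device.
        self.completed.append(logical_sector)
        if len(self.scratch.slots) == SLOTS_PER_BLOCK:
            self.flush()

    def flush(self):
        # Commit the accumulated write data from several write commands
        # to the main storage device in one pass.
        for (phys, off), data in zip(self.scratch.header, self.scratch.slots):
            block = self.main_store.setdefault(phys, bytearray(PHYSICAL_SECTOR))
            block[off:off + LOGICAL_SECTOR] = data
        self.scratch = ScratchBlock()
```

The key property the sketch preserves is that `completed` grows on every write while `main_store` stays empty until a scratch block fills and flushes.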

    METHOD AND SYSTEM THAT INCREASE STORAGE-STACK THROUGHPUT

    Publication Number: US20170351437A1

    Publication Date: 2017-12-07

    Application Number: US15174631

    Application Date: 2016-06-06

    Applicant: VMware, Inc.

    Abstract: The current document is directed to a storage stack subsystem of a computer system that transfers data between memory and various data-storage devices and subsystems and that processes I/O requests at a greater rate than conventional storage stacks. In one implementation, the disclosed storage stack includes a latency monitor, an I/O-scheduling bypass pathway, and a short-circuit switch controlled by the latency monitor. While the latency associated with I/O-request execution remains below a threshold latency, I/O-scheduling components of the storage stack are bypassed, with I/O requests routed directly to multiple input queues associated with one or more high-throughput multi-queue I/O device controllers. When the latency for execution of I/O requests rises above the threshold latency, I/O requests are instead directed to I/O-scheduling components of the storage stack, which attempt to optimally reorganize the incoming I/O-request stream and optimally distribute I/O requests among multiple input queues associated with I/O device controllers.
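The mechanism above — a latency monitor driving a short-circuit switch between a direct-to-queue fast path and the scheduling components — can be sketched as follows. This is a hedged structural model only; the threshold value, moving-average window, and all names (`StorageStack`, `THRESHOLD_US`) are illustrative assumptions, not the patented design.

```python
# Structural sketch of a latency-controlled I/O-scheduling bypass
# (all names and constants are hypothetical).

from collections import deque

THRESHOLD_US = 500.0   # latency threshold selecting the fast path

class StorageStack:
    def __init__(self, num_queues=4):
        # Input queues of high-throughput multi-queue device controllers.
        self.device_queues = [deque() for _ in range(num_queues)]
        self.scheduler_queue = deque()          # I/O-scheduling components
        self.recent_latencies = deque(maxlen=32)

    def record_completion(self, latency_us):
        # Latency monitor: track recent I/O-request execution latencies.
        self.recent_latencies.append(latency_us)

    def _bypass_enabled(self):
        if not self.recent_latencies:
            return True
        avg = sum(self.recent_latencies) / len(self.recent_latencies)
        return avg < THRESHOLD_US

    def submit(self, request):
        if self._bypass_enabled():
            # Short circuit: route directly onto a device input queue,
            # skipping the I/O-scheduling components entirely.
            q = self.device_queues[hash(request) % len(self.device_queues)]
            q.append(request)
            return "bypass"
        # Slow path: hand the request to the scheduler for reordering
        # and distribution among the device queues.
        self.scheduler_queue.append(request)
        return "scheduled"
```

The design point illustrated is that the switch is reactive: requests take the fast path until observed completions push the average latency over the threshold, after which scheduling re-engages.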

    Logical to physical sector size adapter

    Publication Number: US10152278B2

    Publication Date: 2018-12-11

    Application Number: US15465497

    Application Date: 2017-03-21

    Applicant: VMware, Inc.

    Inventor: Adrian Marinescu

    Abstract: The present disclosure describes processing a write command directed to a block-based main storage device, the command having a target logical sector and write data. The processing may include writing the address of the physical sector in the main storage device that contains the target logical sector to a header portion of a scratch block stored in byte-addressable storage. The write data may be written to a slot in the scratch block. The scratch block may be committed as a scratch block in persistent storage. Subsequent to processing the write command, a write completion response may be signaled to the sender of the write command to indicate to the sender completion of the write command, without having committed the write data to the main storage device. Write data from several write commands may be subsequently committed to the main storage device.

    NON-BLOCKING FLOW CONTROL IN MULTI-PROCESSING-ENTITY SYSTEMS

    Publication Number: US20170351441A1

    Publication Date: 2017-12-07

    Application Number: US15174376

    Application Date: 2016-06-06

    Applicant: VMware, Inc.

    Inventor: Adrian Marinescu

    CPC classification number: G06F3/0619 G06F3/0631 G06F3/065 G06F3/067 G06F9/5027

    Abstract: The current document is directed to an efficient and non-blocking mechanism for flow control within a multi-processor or multi-core processor with hierarchical memory caches. Traditionally, a centralized shared-computational-resource access pool, accessed using a locking operation, is used to control access to a shared computational resource within a multi-processor system or multi-core processor. The efficient and non-blocking mechanism for flow control, to which the current document is directed, distributes local shared-computational-resource access pools to each core of a multi-core processor and/or to each processor of a multi-processor system, avoiding significant computational overheads associated with cache-controller contention-control for a traditional, centralized access pool and associated with use of locking operations for access to the access pool.
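The core idea of the abstract — replacing one lock-protected central access pool with per-core local pools so the common case touches only core-local state — can be modeled structurally as below. Real implementations operate lock-free on per-core cache lines; this Python sketch (names `DistributedAccessPool`, credit counts, and the stealing fallback are all assumptions) only illustrates the pool distribution, not the cache-coherence or lock-free aspects.

```python
# Structural model of distributed per-core access pools
# (hypothetical names; Python cannot demonstrate the lock-free,
# cache-line-local behavior the patent targets).

class DistributedAccessPool:
    def __init__(self, num_cores, total_credits):
        # Each core receives a private slice of the access credits, so
        # the common acquire/release path touches only core-local state,
        # avoiding contention on a centralized, lock-protected pool.
        per_core = total_credits // num_cores
        self.local = [per_core] * num_cores

    def acquire(self, core):
        if self.local[core] > 0:
            self.local[core] -= 1
            return True
        # Local pool exhausted: fall back to taking a credit from
        # another core's pool (the rare cross-core path).
        for other, n in enumerate(self.local):
            if n > 0:
                self.local[other] -= 1
                return True
        return False

    def release(self, core):
        self.local[core] += 1
```

The point of the partitioning is that, as long as each core stays within its local slice, no two cores ever write the same counter.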

    Non-blocking flow control in multi-processing-entity systems

    Publication Number: US11301142B2

    Publication Date: 2022-04-12

    Application Number: US15174376

    Application Date: 2016-06-06

    Applicant: VMware, Inc.

    Inventor: Adrian Marinescu

    Abstract: The current document is directed to an efficient and non-blocking mechanism for flow control within a multi-processor or multi-core processor with hierarchical memory caches. Traditionally, a centralized shared-computational-resource access pool, accessed using a locking operation, is used to control access to a shared computational resource within a multi-processor system or multi-core processor. The efficient and non-blocking mechanism for flow control, to which the current document is directed, distributes local shared-computational-resource access pools to each core of a multi-core processor and/or to each processor of a multi-processor system, avoiding significant computational overheads associated with cache-controller contention-control for a traditional, centralized access pool and associated with use of locking operations for access to the access pool.

    Content-based post-process data deduplication

    Publication Number: US11144227B2

    Publication Date: 2021-10-12

    Application Number: US15698636

    Application Date: 2017-09-07

    Applicant: VMware, Inc.

    Abstract: Techniques for implementing content-based post-process data deduplication are provided. In one set of embodiments, a computer system can receive a write request comprising write data to be persisted to a storage system and can sample a portion of the write data. The computer system can further execute one or more analyses on the sampled portion in order to determine whether the write data is a good deduplication candidate that is likely to contain redundancies which can be eliminated via data deduplication. If the one or more analyses indicate that the write data is a good deduplication candidate, the computer system can cause the write data to be persisted to a staging storage component of the storage system. Otherwise, the computer system can cause the write data to be persisted to a primary storage component of the storage system that is separate from the staging storage component.
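The abstract's write path — sample a portion of the write data, analyze the sample, and route likely-redundant data to staging storage for later deduplication and everything else to primary storage — is sketched below. The patent does not publish its exact analyses; the byte-entropy heuristic, threshold, and all names here are assumptions chosen purely to make the routing concrete.

```python
# Hedged sketch of content-based routing for post-process deduplication
# (the entropy heuristic and every name/constant are assumptions).

import math
from collections import Counter

SAMPLE_BYTES = 256
ENTROPY_THRESHOLD = 6.0   # bits/byte; low entropy suggests redundancy

def sample_entropy(data):
    # One possible analysis: Shannon entropy of a sampled prefix.
    sample = data[:SAMPLE_BYTES]
    counts = Counter(sample)
    total = len(sample)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def route_write(data, staging, primary):
    """Persist to the staging component if the sample suggests the write
    is a good deduplication candidate; otherwise persist to primary."""
    if sample_entropy(data) < ENTROPY_THRESHOLD:
        staging.append(data)
        return "staging"
    primary.append(data)
    return "primary"
```

Highly repetitive data scores near zero entropy and lands in staging, while data with no byte-level redundancy in the sample bypasses the staging component entirely.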

    Content-Based Post-Process Data Deduplication

    Publication Number: US20190073151A1

    Publication Date: 2019-03-07

    Application Number: US15698636

    Application Date: 2017-09-07

    Applicant: VMware, Inc.

    Abstract: Techniques for implementing content-based post-process data deduplication are provided. In one set of embodiments, a computer system can receive a write request comprising write data to be persisted to a storage system and can sample a portion of the write data. The computer system can further execute one or more analyses on the sampled portion in order to determine whether the write data is a good deduplication candidate that is likely to contain redundancies which can be eliminated via data deduplication. If the one or more analyses indicate that the write data is a good deduplication candidate, the computer system can cause the write data to be persisted to a staging storage component of the storage system. Otherwise, the computer system can cause the write data to be persisted to a primary storage component of the storage system that is separate from the staging storage component.

    Method and system that increase storage-stack throughput

    Publication Number: US10108349B2

    Publication Date: 2018-10-23

    Application Number: US15174631

    Application Date: 2016-06-06

    Applicant: VMware, Inc.

    Abstract: The current document is directed to a storage stack subsystem of a computer system that transfers data between memory and various data-storage devices and subsystems and that processes I/O requests. In one implementation, the disclosed storage stack includes a latency monitor, an I/O-scheduling bypass pathway, and a short-circuit switch controlled by the latency monitor. While the latency associated with I/O-request execution remains below a threshold latency, I/O-scheduling components of the storage stack are bypassed, with I/O requests routed directly to multiple input queues associated with one or more high-throughput multi-queue I/O device controllers. When the latency for execution of I/O requests rises above the threshold latency, I/O requests are instead directed to I/O-scheduling components of the storage stack, which attempt to optimally reorganize the incoming I/O-request stream and optimally distribute I/O requests among multiple input queues associated with I/O device controllers.
