Abstract:
Techniques for surfacing host-side flash storage capacity to a plurality of VMs running on a host system are provided. In one embodiment, the host system creates, for each VM in the plurality of VMs, a flash storage space allocation in a flash storage device that is locally attached to the host system. The host system then causes the flash storage space allocation to be readable and writable by the VM as a virtual flash memory device.
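The following is a minimal sketch, in Python, of the per-VM allocation idea described above; the class and variable names (FlashDevice, VirtualFlashDevice, etc.) are hypothetical illustrations, not the embodiment claimed in the abstract.

```python
# Minimal sketch (hypothetical names): carving a per-VM allocation out of a
# locally attached flash device and exposing it to the VM as its own device.

class FlashDevice:
    """Models a host-attached flash device as a flat byte-addressable space."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = bytearray(capacity)
        self.next_free = 0

    def allocate(self, size):
        """Reserve a contiguous region; return its starting offset."""
        if self.next_free + size > self.capacity:
            raise RuntimeError("flash device is full")
        offset = self.next_free
        self.next_free += size
        return offset

class VirtualFlashDevice:
    """Per-VM view of one allocation, readable and writable only by that VM."""
    def __init__(self, flash, offset, size):
        self.flash, self.offset, self.size = flash, offset, size

    def read(self, addr, length):
        assert addr + length <= self.size, "access outside the VM's allocation"
        start = self.offset + addr
        return bytes(self.flash.data[start:start + length])

    def write(self, addr, buf):
        assert addr + len(buf) <= self.size, "access outside the VM's allocation"
        start = self.offset + addr
        self.flash.data[start:start + len(buf)] = buf

# Example: two VMs each receive a private slice of the same flash device.
flash = FlashDevice(capacity=1 << 20)
vm_disks = {vm: VirtualFlashDevice(flash, flash.allocate(256 * 1024), 256 * 1024)
            for vm in ("vm1", "vm2")}
vm_disks["vm1"].write(0, b"hello")
assert vm_disks["vm1"].read(0, 5) == b"hello"
```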
Abstract:
Techniques for utilizing flash storage as an extension of hard disk drive (HDD)-based storage are provided. In one embodiment, a computer system can store a first subset of blocks of a logical file in a first physical file residing on a flash storage tier, and a second subset of blocks of the logical file in a second physical file residing on an HDD storage tier. The computer system can then receive an I/O request directed to one or more blocks of the logical file and process the I/O request by accessing the flash storage tier or the HDD storage tier, the accessing being based on whether the one or more blocks are part of the first subset of blocks stored in the first physical file.
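A minimal sketch of the block-routing behavior described above, assuming the two physical files already exist on their respective tiers; names such as TieredFile and flash_blocks are illustrative only.

```python
# Minimal sketch (hypothetical names): a logical file whose blocks are split
# between a flash-resident physical file and an HDD-resident physical file.
BLOCK_SIZE = 4096

class TieredFile:
    def __init__(self, flash_path, hdd_path, flash_blocks):
        self.flash = open(flash_path, "r+b")   # physical file on the flash tier
        self.hdd = open(hdd_path, "r+b")       # physical file on the HDD tier
        self.flash_blocks = set(flash_blocks)  # block numbers of the first subset

    def _backing(self, block_no):
        # Route to the flash tier if the block belongs to the first subset,
        # otherwise fall through to the HDD tier.
        return self.flash if block_no in self.flash_blocks else self.hdd

    def read_block(self, block_no):
        f = self._backing(block_no)
        f.seek(block_no * BLOCK_SIZE)
        return f.read(BLOCK_SIZE)

    def write_block(self, block_no, data):
        f = self._backing(block_no)
        f.seek(block_no * BLOCK_SIZE)
        f.write(data[:BLOCK_SIZE])
```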
Abstract:
Techniques for automatically allocating space in a flash storage-based cache are provided. In one embodiment, a computer system collects I/O trace logs for a plurality of virtual machines or a plurality of virtual disks and determines cache utility models for the plurality of virtual machines or the plurality of virtual disks based on the I/O trace logs. The cache utility model for each virtual machine or each virtual disk defines an expected utility of allocating space in the flash storage-based cache to the virtual machine or the virtual disk over a range of different cache allocation sizes. The computer system then calculates target cache allocation sizes for the plurality of virtual machines or the plurality of virtual disks based on the cache utility models and allocates space in the flash storage-based cache based on the target cache allocation sizes.
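One way to realize the allocation step is a greedy marginal-utility pass over per-VM utility curves. The sketch below assumes toy utility functions standing in for models derived from I/O trace logs; the function names and curve shapes are hypothetical.

```python
# Minimal sketch (hypothetical utility curves): greedy allocation of a shared
# flash cache based on per-VM cache utility models. Each model maps a candidate
# allocation size to an expected utility (e.g., expected hits saved), which in
# practice would be derived from the collected I/O trace logs.
import math

def allocate_cache(total_blocks, utility_models, step=64):
    """utility_models: {vm: f(size_in_blocks) -> expected utility}."""
    alloc = {vm: 0 for vm in utility_models}
    remaining = total_blocks
    while remaining >= step:
        # Give the next `step` blocks to the VM with the largest marginal gain.
        def gain(vm):
            f = utility_models[vm]
            return f(alloc[vm] + step) - f(alloc[vm])
        best = max(alloc, key=gain)
        if gain(best) <= 0:
            break
        alloc[best] += step
        remaining -= step
    return alloc

# Example with toy utility curves exhibiting diminishing returns.
models = {"vm1": lambda s: 100 * math.log1p(s / 128),
          "vm2": lambda s: 40 * math.log1p(s / 32)}
print(allocate_cache(total_blocks=1024, utility_models=models))
```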
Abstract:
An I/O hint framework is provided. In one embodiment, a computer system can receive an I/O command originating from a virtual machine (VM), where the I/O command identifies a data block of a virtual disk. The computer system can further extract hint metadata from the I/O command, where the hint metadata includes one or more characteristics of the data block that are relevant for determining how to cache the data block in a flash storage-based cache. The computer system can then make the hint metadata available to a caching module configured to manage the flash storage-based cache.
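A minimal sketch of the hint-extraction flow, assuming a hypothetical IOCommand structure and a caching policy that merely pins blocks flagged as filesystem metadata; the fields and policy are illustrative, not the claimed framework.

```python
# Minimal sketch (hypothetical fields): pulling hint metadata out of a VM I/O
# command and handing it to the caching module that manages the flash cache.
from dataclasses import dataclass, field

@dataclass
class IOCommand:
    vm_id: str
    block: int                                  # data block of the virtual disk
    op: str                                     # "read" or "write"
    hints: dict = field(default_factory=dict)   # e.g. {"fs_type": "metadata"}

class CachingModule:
    def __init__(self):
        self.pinned = set()

    def consume_hint(self, block, hints):
        # Use block characteristics to pick a caching policy; here, pin
        # filesystem metadata blocks so they are never evicted.
        if hints.get("fs_type") == "metadata":
            self.pinned.add(block)

def handle_io(cmd, cache):
    hint_metadata = dict(cmd.hints)               # extract hints from the command
    cache.consume_hint(cmd.block, hint_metadata)  # make them available to the cache
    # ... the I/O itself would be serviced here ...

cache = CachingModule()
handle_io(IOCommand("vm1", block=42, op="read", hints={"fs_type": "metadata"}), cache)
assert 42 in cache.pinned
```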
Abstract:
Techniques for using a host-side cache to accelerate virtual machine (VM) I/O are provided. In one embodiment, the hypervisor of a host system can intercept an I/O request from a VM running on the host system, where the I/O request is directed to a virtual disk residing on a shared storage device. The hypervisor can then process the I/O request by accessing a host-side cache that resides on one or more cache devices distinct from the shared storage device, where the accessing of the host-side cache is transparent to the VM.
Abstract:
Techniques for using a cache to accelerate virtual machine (VM) I/O are provided. In one embodiment, a host system can intercept an I/O request from a VM running on the host system, where the I/O request is directed to a virtual disk residing on a shared storage device. The host system can then process the I/O request by accessing a cache that resides on one or more cache devices directly attached to the host system, where the accessing of the cache is transparent to the VM.
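A minimal sketch of the interception path described in the two abstracts above, assuming a simple read-through cache populated on miss; the class names, the FIFO-style eviction, and the dict-backed shared storage are hypothetical stand-ins.

```python
# Minimal sketch (hypothetical names): a host-side cache, kept on cache devices
# attached to the host, that the hypervisor consults on intercepted VM reads.
# The VM only ever addresses its virtual disk, so the cache is transparent to it.

class HostSideCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = {}          # (disk_id, block_no) -> data on the cache device

    def read(self, key):
        return self.blocks.get(key)

    def put(self, key, data):
        if len(self.blocks) >= self.capacity:
            self.blocks.pop(next(iter(self.blocks)))   # crude FIFO-style eviction
        self.blocks[key] = data

def intercept_read(cache, shared_storage, disk_id, block_no):
    """Called for each intercepted VM read; transparent to the VM."""
    key = (disk_id, block_no)
    data = cache.read(key)
    if data is None:                  # cache miss: go to the shared storage device
        data = shared_storage[key]
        cache.put(key, data)          # populate the host-side cache for next time
    return data

# Example: a second read of the same block is served from the cache.
storage = {("disk0", 7): b"block7"}
cache = HostSideCache(capacity_blocks=128)
assert intercept_read(cache, storage, "disk0", 7) == b"block7"
assert ("disk0", 7) in cache.blocks
```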
Abstract:
Techniques for utilizing flash storage as an extension of hard disk drive (HDD) storage are provided. In one embodiment, a computer system stores a first subset of blocks of a logical file in a first physical file on flash storage, the first physical file being associated with a first data structure that represents a filesystem object, and stores a second subset of blocks in a second physical file on HDD storage, the second physical file being associated with a second data structure that represents a filesystem object comprising tiering configuration information that includes an identifier of the first physical file. The computer system processes an I/O request directed to one or more blocks of the logical file by verifying that the tiering configuration information exists in the data structure, determining whether the one or more blocks are part of the first subset or the second subset, and directing the request to the physical file on the flash storage or the HDD storage accordingly.
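A minimal sketch of the routing decision driven by the tiering configuration, assuming a hypothetical inode-like FSObject whose configuration names its flash-tier peer file; all names and fields are illustrative.

```python
# Minimal sketch (hypothetical structures): an HDD-side filesystem object whose
# tiering configuration identifies its flash-tier peer file, used to route block I/O.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FSObject:
    """Stands in for an inode-like filesystem object."""
    path: str
    tiering_config: Optional[dict] = None  # e.g. {"flash_peer": ..., "flash_blocks": ...}

def route_block(hdd_object, block_no):
    """Return the path of the physical file that should service `block_no`."""
    cfg = hdd_object.tiering_config
    if cfg is None:                       # no tiering configured: everything on HDD
        return hdd_object.path
    if block_no in cfg["flash_blocks"]:   # first subset lives in the flash peer file
        return cfg["flash_peer"]
    return hdd_object.path                # second subset stays in the HDD file

obj = FSObject(path="/hdd/vm.vmdk",
               tiering_config={"flash_peer": "/flash/vm.vmdk.tier",
                               "flash_blocks": {0, 1, 7}})
assert route_block(obj, 1) == "/flash/vm.vmdk.tier"
assert route_block(obj, 5) == "/hdd/vm.vmdk"
```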
Abstract:
A technique for efficient cache management demotes a unit of data from a higher cache level to a lower cache level in a cache hierarchy when the higher level cache evicts the unit of data. In a virtualization computing environment, eviction of the unit of data may be inferred by observing privileged memory and disk operations performed by a guest operating system and trapped by virtualization software for execution. When the unit of data is inferred to be evicted, the unit of data is demoted by transferring the unit of data into the lower cache level. This technique enables exclusive caching without direct involvement or modification of the guest operating system. In alternative embodiments, a pseudo-driver installed within the guest operating system explicitly tracks memory operations and transmits page eviction information to the lower level cache, which is able to cache evicted pages while maintaining cache exclusivity.
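A minimal sketch of demotion-on-eviction across a two-level hierarchy, assuming simple LRU lists in place of the guest page cache and host flash cache; the classes and capacities are hypothetical and the eviction-inference machinery is omitted.

```python
# Minimal sketch (hypothetical two-level hierarchy): when the upper cache evicts
# a page, it is demoted into the lower cache rather than discarded; promotion
# removes the page from the lower level, keeping the levels exclusive.
from collections import OrderedDict

class Cache:
    def __init__(self, capacity, lower=None):
        self.capacity = capacity
        self.lower = lower                   # next (lower) cache level, if any
        self.pages = OrderedDict()           # page_id -> data, in LRU order

    def insert(self, page_id, data):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)
            return
        if len(self.pages) >= self.capacity:
            victim, victim_data = self.pages.popitem(last=False)   # evict LRU page
            if self.lower is not None:
                self.lower.insert(victim, victim_data)             # demote, don't drop
        self.pages[page_id] = data

    def lookup(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)
            return self.pages[page_id]
        if self.lower is not None and page_id in self.lower.pages:
            data = self.lower.pages.pop(page_id)   # promote and remove from the
            self.insert(page_id, data)             # lower level to stay exclusive
            return data
        return None

flash_cache = Cache(capacity=4)                       # lower level (host flash cache)
guest_cache = Cache(capacity=2, lower=flash_cache)    # higher level (guest page cache)
for i in range(4):
    guest_cache.insert(i, f"page{i}")
assert 0 in flash_cache.pages and 0 not in guest_cache.pages
```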