Abstract:
One or more embodiments of the present invention provide a technique for effectively managing virtualized computing systems with an unlimited number of hardware resources. Host systems included in a virtualized computer system are organized into a scalable, peer-to-peer (P2P) network in which host systems arrange themselves into a network overlay to communicate with one another. The network overlay enables the host systems to perform a variety of operations, which include dividing computing resources of the host systems among a plurality of virtual machines (VMs), load balancing VMs across the host systems, and performing an initial placement of a VM in one of the host systems.
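The abstract does not name a particular overlay protocol; one way the described network overlay could be realized is a consistent-hashing ring, assumed in the Python sketch below, where hosts hash their own names onto ring positions and an initial-placement request for a VM is routed to the host that owns the VM's position. All identifiers here (OverlayRing, host-a, vm-42) are hypothetical.

import hashlib
from bisect import bisect_right

def ring_id(name, space=2**32):
    # Hash a host or VM name into the overlay's identifier space.
    return int(hashlib.sha256(name.encode()).hexdigest(), 16) % space

class OverlayRing:
    # Hosts arrange themselves on a hash ring; any host can route a
    # placement request to the host that owns the VM's ring position.
    def __init__(self):
        self.ring = []  # sorted (identifier, host name) pairs

    def join(self, host_name):
        self.ring.append((ring_id(host_name), host_name))
        self.ring.sort()

    def responsible_host(self, vm_name):
        ids = [i for i, _ in self.ring]
        idx = bisect_right(ids, ring_id(vm_name)) % len(self.ring)
        return self.ring[idx][1]

overlay = OverlayRing()
for h in ("host-a", "host-b", "host-c"):
    overlay.join(h)
print(overlay.responsible_host("vm-42"))  # host chosen for initial placement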
Abstract:
Maximum throughput of a storage unit, and workload and latency values of the storage unit corresponding to a predefined fraction of the maximum throughput, are estimated based on workloads and latencies that are monitored on the storage unit. The computed metrics are usable in a variety of applications, including admission control, storage load balancing, and enforcing quality of service in a shared storage environment.
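The abstract does not fix an estimation model; one common assumption is that latency grows roughly linearly with the number of outstanding IOs, so a least-squares fit of monitored (outstanding IOs, latency) samples yields a slope a and intercept b, the maximum throughput is approximately 1/a by Little's law, and the workload and latency at a fraction f of that maximum follow in closed form. The Python sketch below illustrates that assumed approach; estimate_metrics and its sample format are hypothetical.

def estimate_metrics(samples, fraction=0.9):
    # samples: monitored (outstanding_ios, avg_latency_seconds) pairs.
    # Least-squares fit of latency ~= a*Q + b; throughput Q/(a*Q + b)
    # approaches 1/a for large Q, taken here as the maximum throughput.
    n = len(samples)
    sum_q = sum(q for q, _ in samples)
    sum_l = sum(l for _, l in samples)
    sum_qq = sum(q * q for q, _ in samples)
    sum_ql = sum(q * l for q, l in samples)
    a = (n * sum_ql - sum_q * sum_l) / (n * sum_qq - sum_q ** 2)
    b = (sum_l - a * sum_q) / n
    max_throughput = 1.0 / a                          # IOs per second
    workload = fraction * b / (a * (1.0 - fraction))  # outstanding IOs at f * max
    latency = b / (1.0 - fraction)                    # latency at f * max
    return max_throughput, workload, latency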
Abstract:
Embodiments perform centralized input/output (I/O) path selection for hosts accessing storage devices in distributed resource sharing environments. The path selection accommodates loads along the paths through the fabric and at the storage devices. Topology changes may also be identified and automatically initiated. Some embodiments contemplate the hosts executing a plurality of virtual machines (VMs) accessing logical unit numbers (LUNs) in a storage area network (SAN).
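As one plausible reading of load-aware path selection, the sketch below scores each candidate path by the load on its most loaded component (fabric links plus the target storage device) and picks the least loaded path. The function select_path and its load maps are hypothetical, and a real selector might weight fabric and device load differently.

def select_path(paths, link_load, device_load):
    # paths: candidate (links, device) tuples for one host/LUN pair.
    # link_load and device_load map each link or device to its current load.
    def cost(path):
        links, device = path
        return max([link_load[l] for l in links] + [device_load[device]])
    best = min(paths, key=cost)
    # Record the newly routed traffic so later selections see the added load.
    for link in best[0]:
        link_load[link] += 1
    device_load[best[1]] += 1
    return best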
Abstract:
Systems and methods are disclosed that allow for transparently recovering from an uncorrected multi-bit error of arbitrary length located at a memory address. Storing one or more parity pages for a set of pages in system memory, such that a page in the set of pages may be reconstructed using one of the parity pages, is disclosed. Storing an indication of one or more pages' disk locations, such that the one or more pages may be reconstructed by refilling them from disk, is also disclosed.
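A minimal instance of the parity-page idea is a single XOR parity page over a set of pages, RAID-5 style: the lost page is recovered by XOR-ing the parity page with the surviving pages. The sketch below assumes that simple scheme; the abstract's "one or more parity pages" also admits stronger codes.

PAGE_SIZE = 4096

def make_parity(pages):
    # Parity page: byte-wise XOR of every page in the protected set.
    parity = bytearray(PAGE_SIZE)
    for page in pages:
        for i, byte in enumerate(page):
            parity[i] ^= byte
    return bytes(parity)

def reconstruct(parity, surviving_pages):
    # XOR the parity page with the surviving pages to recover the page
    # that took the uncorrected multi-bit error.
    recovered = bytearray(parity)
    for page in surviving_pages:
        for i, byte in enumerate(page):
            recovered[i] ^= byte
    return bytes(recovered)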
Abstract:
An anomaly in a shared input/output (IO) resource that is accessed by a plurality of hosts or clients is detected when a host that is not bound by any quality of service (QoS) policy presents large workloads to a shared IO resource that is also accessed by hosts or clients that are governed by a QoS policy. The anomaly detection triggers a response from the hosts or clients as a way to protect against the effect of the anomaly. The response is an increase in window sizes. The window sizes of the hosts or clients may be increased to the maximum window size or in proportion to their QoS shares.
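The two responses named in the abstract can be sketched as follows, assuming per-host window sizes and QoS shares kept in dictionaries; that the proportional variant scales the maximum window by each host's share of the total shares, as done here, is an assumption.

def respond_to_anomaly(current_windows, qos_shares, max_window, proportional=True):
    # current_windows and qos_shares are dicts keyed by host identifier.
    if not proportional:
        # First policy from the abstract: jump every host to the maximum window.
        return {host: max_window for host in current_windows}
    # Second policy: scale the maximum by each host's share of total shares,
    # never shrinking a window below its current value.
    total = sum(qos_shares.values())
    return {host: max(current_windows[host], max_window * qos_shares[host] / total)
            for host in current_windows}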
Abstract:
In an example, a method for performing initial placement of a data object in a distributed system that includes a plurality of hardware resources includes receiving a request to create an instance of a data object; determining, in response to the request, a list of hardware resources that satisfy one or more criteria of the data object; creating, in response to the request, a virtual cluster that includes a subset of the hardware resources included in the list of hardware resources; selecting a hardware resource from the virtual cluster into which the data object is to be placed; placing the data object into the hardware resource; and releasing the virtual cluster.
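The claimed steps map directly onto a short sketch; the free-space criteria check, the random choice of the virtual-cluster subset, and the most-free-space selection policy are assumptions made only for illustration.

import random

def place_data_object(obj, resources, cluster_size=8):
    # 1. Hardware resources that satisfy the object's criteria (a free-space
    #    check stands in for whatever criteria the object actually carries).
    candidates = [r for r in resources if r["free"] >= obj["size"]]
    if not candidates:
        raise RuntimeError("no hardware resource satisfies the criteria")
    # 2. Create a virtual cluster: a temporary subset of the candidates.
    cluster = random.sample(candidates, min(cluster_size, len(candidates)))
    # 3. Select a resource from the virtual cluster (most free space here).
    target = max(cluster, key=lambda r: r["free"])
    # 4. Place the data object into the selected resource.
    target["free"] -= obj["size"]
    target.setdefault("objects", []).append(obj["name"])
    # 5. Release the virtual cluster (a no-op in this sketch; a real system
    #    would drop any reservations held on the cluster members).
    return target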
Abstract:
A shared input/output (IO) resource is managed in a decentralized manner. Each of multiple hosts having IO access to the shared resource computes an average latency value that is normalized with respect to average IO request sizes, and stores the computed normalized latency value for later use. The normalized latency values thus computed and stored may be used for a variety of different applications, including enforcing a quality of service (QoS) policy that is applied to the hosts, detecting a condition known as an anomaly, in which a host that is not bound by a QoS policy accesses the shared resource at a rate that impacts the level of service received by the hosts that are bound by the QoS policy, and migrating workloads between storage arrays to achieve load balancing across the storage arrays.
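The abstract does not specify the normalization formula; one plausible choice, assumed in the sketch below, divides the average latency by a factor that grows with the average IO request size so that large requests do not register as congestion. REFERENCE_IO_SIZE and normalized_latency are hypothetical names.

REFERENCE_IO_SIZE = 32 * 1024  # bytes; assumed normalization constant

def normalized_latency(latencies, io_sizes):
    # Per-host computation: average the observed latencies, then discount
    # them by the average request size so that large IOs are not mistaken
    # for congestion.  The stored value is what the QoS, anomaly-detection,
    # and load-balancing logic would later consume.
    avg_latency = sum(latencies) / len(latencies)
    avg_io_size = sum(io_sizes) / len(io_sizes)
    return avg_latency / (1.0 + avg_io_size / REFERENCE_IO_SIZE)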