Abstract:
Embodiments of the present invention provide performance isolation for storage clouds. Under one embodiment, workloads across a storage cloud architecture are grouped into clusters based on administrator or system input. A performance isolation domain is then created for each of the clusters, with each of the performance isolation domains comprising a set of data stores associated with a set of storage subsystems and a set of data paths that connect the set of data stores to a set of clients. Thereafter, performance isolation is provided among a set of layers of the performance isolation domains. Such performance isolation is provided by (among other things): pooling data stores from separate performance isolation domains into separate pools; assigning the pools to device adapters, RAID controllers, and the set of storage subsystems; preventing workloads on the device adapters from exceeding capacities of the device adapters; mapping the set of data stores to a set of Input/Output (I/O) servers based on an I/O capacity and I/O load of the set of I/O servers; and/or pairing ports of the set of I/O servers with ports of the set of storage subsystems, the pairing being based upon availability, connectivity, I/O load, and I/O capacity.
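As a rough illustration of the pooling and mapping steps above, the following Python sketch groups workloads into clusters, pools each cluster's data stores into a performance isolation domain, and greedily maps the pooled stores to I/O servers by capacity and load. The DataStore and IOServer records, the capacity figures, and the greedy headroom rule are illustrative assumptions, not the claimed method.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class DataStore:
    name: str
    io_load: float            # expected I/O demand (e.g., IOPS)

@dataclass
class IOServer:
    name: str
    io_capacity: float        # maximum I/O load the server can absorb
    io_load: float = 0.0      # I/O load already assigned

def build_isolation_domains(workloads):
    """Group workloads into clusters (here: by an administrator-supplied tag) and
    pool each cluster's data stores into one performance isolation domain."""
    domains = defaultdict(list)
    for wl in workloads:
        domains[wl["cluster"]].extend(wl["data_stores"])
    return domains

def map_stores_to_io_servers(domains, io_servers):
    """Greedily map each domain's data stores to the I/O server with the most
    spare capacity, refusing any placement that would exceed server capacity."""
    placement = {}
    for cluster, stores in domains.items():
        for store in stores:
            # pick the server with the largest remaining headroom
            best = max(io_servers, key=lambda s: s.io_capacity - s.io_load)
            if best.io_capacity - best.io_load < store.io_load:
                raise RuntimeError(f"no I/O server can absorb {store.name}")
            best.io_load += store.io_load
            placement[store.name] = (cluster, best.name)
    return placement

if __name__ == "__main__":
    workloads = [
        {"cluster": "analytics", "data_stores": [DataStore("ds1", 400.0)]},
        {"cluster": "oltp", "data_stores": [DataStore("ds2", 900.0)]},
    ]
    servers = [IOServer("ios-a", 1000.0), IOServer("ios-b", 1000.0)]
    print(map_stores_to_io_servers(build_isolation_domains(workloads), servers))
```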
Abstract:
Embodiments of the present invention provide an approach for adapting an information extraction middleware for a clustered computing environment (e.g., a cloud environment) by creating and managing a set of statistical models generated from performance statistics of operating devices within the clustered computing environment. This approach takes into account the required modeling accuracy, as well as the computation cost of modeling, to pick the best modeling solution at a given point in time. When higher accuracy is desired (e.g., nearing workload saturation), the approach adapts to use an appropriate modeling algorithm. Adapting statistical models to the data characteristics ensures optimal accuracy with minimal computation time and resources for modeling. This approach provides intelligent selective refinement of models using accuracy-based and operating-probability-based triggers to optimize the clustered computing environment, i.e., maximize accuracy and minimize computation time.
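A minimal sketch of the adaptive-selection idea follows, assuming two hypothetical candidate modeling algorithms (a linear fit and a cubic fit) with illustrative cost figures: the cheapest candidate that meets the target accuracy within the computation budget is chosen, and an accuracy-based trigger decides when a model needs refinement. This is an illustration under those assumptions, not the middleware's actual algorithm.

```python
import numpy as np

def fit_linear(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    return lambda q: slope * q + intercept

def fit_cubic(x, y):
    coeffs = np.polyfit(x, y, 3)
    return lambda q: np.polyval(coeffs, q)

# Candidates ordered from cheapest/coarsest to most expensive/most accurate.
# The relative cost figures are illustrative assumptions.
CANDIDATES = [("linear", fit_linear, 1.0), ("cubic", fit_cubic, 4.0)]

def mean_abs_error(model, x, y):
    return float(np.mean(np.abs(model(x) - y)))

def select_model(x, y, target_error, cost_budget):
    """Pick the cheapest candidate whose error meets the target; otherwise keep
    the most accurate candidate that still fits within the cost budget."""
    chosen = None
    for name, fit, cost in CANDIDATES:
        if cost > cost_budget:
            break
        model = fit(x, y)
        err = mean_abs_error(model, x, y)
        chosen = (name, model, err)
        if err <= target_error:
            break                      # cheapest acceptable model found
    return chosen

def needs_refinement(model, new_x, new_y, error_trigger):
    """Accuracy-based trigger: refit when error on fresh samples drifts too high."""
    return mean_abs_error(model, new_x, new_y) > error_trigger

if __name__ == "__main__":
    x = np.linspace(0, 1, 50)
    y = 10 * x ** 3 + np.random.normal(0, 0.05, 50)   # saturating-style curve
    name, model, err = select_model(x, y, target_error=0.1, cost_budget=5.0)
    print(name, round(err, 3))
```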
Abstract:
The present invention proactively identifies hotspots in a cloud computing environment through cloud resource usage models that use workload parameters as inputs. In some embodiments, the cloud resource usage models are based upon performance data from cloud resources and time-series-based workload trend models. Hotspots may occur and can be detected at any layer of the cloud computing environment, including the server, storage, and network layers. In a typical embodiment, parameters for a workload are identified in the cloud computing environment and inputted into a cloud resource usage model. The model is run with the inputted workload parameters to identify potential hotspots, and resources are then provisioned for the workload so as to avoid these hotspots.
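The following Python sketch illustrates the proactive workflow under simplified assumptions: each resource at the server, storage, and network layers is given a toy linear usage model, the workload parameters are run through every model to flag predicted hotspots, and provisioning picks a candidate path that avoids them. The resource names, coefficients, and 80% threshold are hypothetical; the abstract's models would instead be built from historical performance data and time-series trends.

```python
HOTSPOT_THRESHOLD = 0.8   # flag resources predicted to exceed 80% utilization

# Hypothetical model parameters per resource: (layer, base_util, util_per_kiops)
USAGE_MODELS = {
    "server-1": ("server",  0.30, 0.050),
    "server-2": ("server",  0.55, 0.060),
    "pool-A":   ("storage", 0.40, 0.040),
    "pool-B":   ("storage", 0.70, 0.045),
    "fabric-1": ("network", 0.20, 0.030),
}

def predict_utilization(resource, workload_kiops):
    _, base, per_kiops = USAGE_MODELS[resource]
    return base + per_kiops * workload_kiops

def find_hotspots(workload_kiops):
    """Run the workload parameters through every resource model and return the
    resources that would become hotspots at any layer."""
    return {r for r in USAGE_MODELS
            if predict_utilization(r, workload_kiops) > HOTSPOT_THRESHOLD}

def provision(workload_kiops, candidate_paths):
    """Pick the first candidate (server, storage, network) path that avoids all
    predicted hotspots."""
    hot = find_hotspots(workload_kiops)
    for path in candidate_paths:
        if not hot.intersection(path):
            return path
    return None   # every candidate touches a hotspot; caller must rebalance first

if __name__ == "__main__":
    paths = [("server-2", "pool-B", "fabric-1"),
             ("server-1", "pool-A", "fabric-1")]
    print(provision(workload_kiops=5.0, candidate_paths=paths))
```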
Abstract:
Embodiments of the present invention provide an integrated host and subsystem port selection methodology that uses performance measurements combined with information about active data paths. This technique also helps in resilient fabric planning by selecting ports from redundant fabrics. In a typical embodiment, host port to storage port pairs that create a path between a host and a storage device will be identified. From these pairs, a set of host port to storage port candidates for communicating data from the host to the storage device will be identified based on a set of resiliency constraints. Then, a specific host port to storage port pair will be selected from the set based on a lowest joint workload measurement. A path will then be created between the specific host port and storage port, and data will be communicated from the host to the storage device via the path.
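A minimal sketch of the pairing logic follows, assuming hypothetical Port records carrying a fabric identifier and a measured load: candidate pairs are filtered by a simple resiliency rule (same fabric, preferring a fabric not already carrying an active path), and the pair with the lowest joint workload is selected. The resiliency rule here is an illustrative stand-in for the abstract's resiliency constraints.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Port:
    name: str
    fabric: str        # SAN fabric the port is attached to
    load: float        # measured workload (e.g., MB/s)

def candidate_pairs(host_ports, storage_ports, fabrics_in_use):
    """Illustrative resiliency filter: keep only pairs on the same fabric, and
    prefer pairs on a fabric not already carrying an active path, so the new
    path lands on a redundant fabric."""
    same_fabric = [(h, s) for h, s in product(host_ports, storage_ports)
                   if h.fabric == s.fabric]
    redundant = [(h, s) for h, s in same_fabric if h.fabric not in fabrics_in_use]
    return redundant or same_fabric

def select_pair(host_ports, storage_ports, fabrics_in_use=frozenset()):
    """Pick the candidate pair with the lowest joint workload measurement."""
    pairs = candidate_pairs(host_ports, storage_ports, fabrics_in_use)
    return min(pairs, key=lambda pair: pair[0].load + pair[1].load)

if __name__ == "__main__":
    host_ports = [Port("hba0", "fabric-A", 120.0), Port("hba1", "fabric-B", 80.0)]
    storage_ports = [Port("sp0", "fabric-A", 200.0), Port("sp1", "fabric-B", 150.0)]
    h, s = select_pair(host_ports, storage_ports, fabrics_in_use={"fabric-A"})
    print(f"create path {h.name} -> {s.name} on {h.fabric}")
```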
Abstract:
In general, embodiments of the present invention provide an approach for calibrating a cloud computing environment. Specifically, embodiments of the present invention provide an empirical approach for obtaining end-to-end performance characteristics for workloads in the cloud computing environment (hereinafter the “environment”). In a typical embodiment, different combinations of cloud server(s) and cloud storage unit(s) are determined. Then, a virtual machine is deployed to one or more of the servers within the cloud computing environment. The virtual machine is used to generate a desired workload on a set of servers within the environment. Thereafter, performance measurements for each of the different combinations under the desired workload will be taken. Among other things, the performance measurements indicate a connection quality between the set of servers and the set of storage units, and are used in calibrating the cloud computing environment to determine future workload placement. Along these lines, the performance measurements can be populated into a table or the like, and a dynamic map of a data center having the set of storage units can be generated.
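The calibration loop can be pictured with the short Python sketch below: it iterates over every server/storage combination, calls a stand-in measurement routine (a real run would deploy the workload-generator virtual machine and collect end-to-end measurements), populates a calibration table keyed by the pair, and reduces it to a coarse connection-quality map. All names and numbers are illustrative assumptions.

```python
import itertools
import random

def measure_combination(server, storage_unit, workload_iops):
    """Stand-in for running the workload-generator VM against one server/storage
    combination; it fabricates a latency figure so the sketch is runnable."""
    random.seed(server + storage_unit)   # repeatable demo values per pair
    return {"iops": workload_iops, "latency_ms": round(random.uniform(1.0, 8.0), 2)}

def calibrate(servers, storage_units, workload_iops=1000):
    """Measure every server/storage combination under the generated workload and
    populate a calibration table keyed by the pair."""
    table = {}
    for server, storage_unit in itertools.product(servers, storage_units):
        table[(server, storage_unit)] = measure_combination(
            server, storage_unit, workload_iops)
    return table

def connection_quality_map(table, good_latency_ms=4.0):
    """Reduce the calibration table to a coarse connection-quality map that later
    workload-placement decisions can consult."""
    return {pair: ("good" if m["latency_ms"] <= good_latency_ms else "poor")
            for pair, m in table.items()}

if __name__ == "__main__":
    table = calibrate(["srv1", "srv2"], ["stg1", "stg2"])
    for pair, quality in connection_quality_map(table).items():
        print(pair, quality)
```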
Abstract:
Embodiments of the present invention provide an approach for intelligent storage planning within a clustered computing environment (e.g., a cloud computing environment). Specifically, embodiments of the present invention will first determine/identify a set of storage area network volume controllers (SVCs) that is accessible from a host that has submitted a request for access to storage. Thereafter, a set of managed disk (mdisk) groups (i.e., corresponding to the set of SVCs) that are candidates for satisfying the request will be determined. This set of mdisk groups will then be filtered based on available space therein, a set of user/requester preferences, and optionally, a set of performance characteristics. Then, a particular mdisk group will be selected from the set of mdisk groups based on the filtering.
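A simple sketch of the filtering pipeline follows, using hypothetical MdiskGroup records: candidates are restricted to SVCs the host can reach, filtered by free space, a requester tier preference, and an optional latency bound, and the group with the most free space is selected. The field names and the final tie-breaking rule are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class MdiskGroup:
    name: str
    svc: str                  # SVC that owns this managed-disk group
    free_gb: float
    tier: str                 # e.g., "ssd" or "hdd"
    avg_latency_ms: float

def select_mdisk_group(accessible_svcs, mdisk_groups, requested_gb,
                       preferred_tier=None, max_latency_ms=None):
    """Filter candidate mdisk groups down to those on an SVC the host can reach,
    with enough free space, matching the requester's preferences and (optionally)
    a performance constraint, then pick the one with the most free space."""
    candidates = [g for g in mdisk_groups if g.svc in accessible_svcs]
    candidates = [g for g in candidates if g.free_gb >= requested_gb]
    if preferred_tier is not None:
        candidates = [g for g in candidates if g.tier == preferred_tier]
    if max_latency_ms is not None:
        candidates = [g for g in candidates if g.avg_latency_ms <= max_latency_ms]
    if not candidates:
        return None
    return max(candidates, key=lambda g: g.free_gb)

if __name__ == "__main__":
    groups = [
        MdiskGroup("mdg-ssd-1", "svc-1", 500.0, "ssd", 1.2),
        MdiskGroup("mdg-hdd-1", "svc-1", 4000.0, "hdd", 6.5),
        MdiskGroup("mdg-ssd-2", "svc-2", 900.0, "ssd", 1.4),
    ]
    choice = select_mdisk_group({"svc-1"}, groups, requested_gb=300,
                                preferred_tier="ssd", max_latency_ms=2.0)
    print(choice.name if choice else "no suitable mdisk group")
```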