Abstract:
A service level objective (SLO) violation is detected for a workload of a networked storage system, based on a performance metric not being satisfied for the workload. In response to detecting the SLO violation, a controller determines that changing a level of caching at a node of the networked storage system will improve the performance metric for the workload. The controller implements the change by adjusting an operation of a virtual cache appliance (VCA) of the networked storage system. The adjusting can be instantiating a new VCA, or adjusting the level of caching at an existing VCA. The adjusting can be for caching related to the workload itself, or it can be caching for an interfering workload.
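As a rough illustration of the control loop described above, the following Python sketch detects an unmet latency metric and either instantiates a new VCA or grows an existing one; the metric names, cache sizes, and the VCA interface (VirtualCacheAppliance, resize) are hypothetical placeholders rather than the patent's actual implementation.

```python
# Hypothetical sketch of the SLO-driven caching controller described above.
# The VCA interface, metric names, and sizes are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

class VirtualCacheAppliance:
    def __init__(self, node: str, cache_gb: int):
        self.node = node
        self.cache_gb = cache_gb

    def resize(self, cache_gb: int) -> None:
        print(f"VCA on {self.node}: cache {self.cache_gb} GB -> {cache_gb} GB")
        self.cache_gb = cache_gb

@dataclass
class Workload:
    name: str
    latency_ms: float               # observed performance metric
    slo_latency_ms: float           # SLO target for that metric
    vca: Optional[VirtualCacheAppliance] = None

def caching_would_help(workload: Workload) -> bool:
    # Stand-in for the controller's analysis that more caching at a node
    # will improve the violated metric (e.g. cache-miss-dominated latency).
    return True

def handle_slo_violation(workload: Workload, node: str) -> None:
    """Detect an SLO violation and adjust VCA-based caching in response."""
    if workload.latency_ms <= workload.slo_latency_ms:
        return  # metric satisfied, nothing to do
    if not caching_would_help(workload):
        return  # some other corrective action would be needed
    if workload.vca is None:
        # Instantiate a new virtual cache appliance for this workload.
        workload.vca = VirtualCacheAppliance(node, cache_gb=32)
        print(f"Instantiated VCA on {node} for {workload.name}")
    else:
        # Raise the level of caching at the existing VCA.
        workload.vca.resize(workload.vca.cache_gb * 2)

handle_slo_violation(Workload("db-workload", latency_ms=12.0, slo_latency_ms=5.0), node="node-1")
```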
Abstract:
Collaborative management of shared resources is implemented by a storage server receiving, from a first resource manager, notification of a violation for a service provided by the storage server or a device coupled to the storage server. The storage server further receives, from each of a plurality of resource managers, an estimated cost of taking a corrective action to mitigate the violation and selects a corrective action proposed by one of the plurality of resource managers based upon the estimated cost. The storage server directs the resource manager that proposed the selected corrective action to perform the selected corrective action.
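A minimal sketch of the selection step, assuming each resource manager submits a proposal carrying its estimated cost and a callback to perform the action; the Proposal structure and the example managers are illustrative, not the patent's interfaces.

```python
# Illustrative sketch: the storage server collects cost estimates from several
# resource managers and delegates the cheapest corrective action.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Proposal:
    manager: str
    action: str
    estimated_cost: float
    perform: Callable[[], None]     # callback the proposing manager will run

def resolve_violation(violation: str, proposals: list[Proposal]) -> None:
    """Pick the lowest-cost proposal and direct its manager to carry it out."""
    if not proposals:
        raise RuntimeError(f"no corrective action proposed for {violation}")
    best = min(proposals, key=lambda p: p.estimated_cost)
    print(f"{violation}: selecting '{best.action}' from {best.manager} "
          f"(cost {best.estimated_cost})")
    best.perform()

resolve_violation(
    "latency SLO violation on vol0",
    [
        Proposal("cache-manager", "grow read cache", 2.0, lambda: print("cache grown")),
        Proposal("migration-manager", "move vol0 to faster tier", 10.0, lambda: print("volume migrated")),
    ],
)
```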
Abstract:
A storage placement planning system receives a resource graph describing a storage area network's (SAN) resources and virtual machine applications, each requiring a particular amount of a processing resource element and a storage resource element. The system then determines a coupled placement of the processing element and storage element for each of the applications on a coupled pair of the resource nodes based on a specified throughput and a distance factor between coupled pairs of resource nodes. The coupled placement is determined using an algorithm that implements a cost function determining affinities between processing nodes and storage nodes for each of said applications of a particular workload. The coupled placement for each of said applications identifies the particular amount of processing resource element placed on a first node for providing a processing resource and the particular amount of storage resource element placed on a second node for providing a storage resource for that application.
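The cost-function-driven placement can be illustrated with a small greedy sketch; the affinity/cost formula (required throughput weighted by the distance factor between a compute node and a storage node) and all node and application fields below are assumptions made for illustration only.

```python
# Hypothetical greedy coupled-placement sketch. The cost formula below is an
# assumption: it charges each application's specified throughput against the
# distance factor between the compute node and storage node it is paired with.

from dataclasses import dataclass
from itertools import product

@dataclass
class Node:
    name: str
    cpu_free: float        # processing resource still available
    storage_free: float    # storage resource still available

@dataclass
class App:
    name: str
    cpu_need: float
    storage_need: float
    throughput: float      # specified throughput between its compute and storage

def place(apps: list[App], compute: list[Node], storage: list[Node],
          distance: dict[tuple[str, str], float]) -> dict[str, tuple[str, str]]:
    """Place each application's processing and storage elements on a coupled node pair."""
    placement: dict[str, tuple[str, str]] = {}
    for app in sorted(apps, key=lambda a: a.throughput, reverse=True):
        best, best_cost = None, float("inf")
        for c, s in product(compute, storage):
            if c.cpu_free < app.cpu_need or s.storage_free < app.storage_need:
                continue   # pair cannot supply the required resource elements
            cost = app.throughput * distance[(c.name, s.name)]
            if cost < best_cost:
                best, best_cost = (c, s), cost
        if best is None:
            raise RuntimeError(f"no feasible coupled pair for {app.name}")
        c, s = best
        c.cpu_free -= app.cpu_need
        s.storage_free -= app.storage_need
        placement[app.name] = (c.name, s.name)
    return placement

dist = {("c1", "s1"): 1.0, ("c1", "s2"): 3.0}
print(place([App("vm-a", cpu_need=2, storage_need=100, throughput=500)],
            [Node("c1", cpu_free=8, storage_free=0)],
            [Node("s1", cpu_free=0, storage_free=1000), Node("s2", cpu_free=0, storage_free=1000)],
            dist))
```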
Abstract:
Deduplication of data using a low-latency random read memory (LLRRM) is described herein. Upon receiving a block, if a matching block stored on a disk device is found, the received block is deduplicated by producing an index to the address location of the matching block. In some embodiments, a matching block having a predetermined threshold number of associated indexes that reference the matching block is transferred to LLRRM, the threshold number being one or greater. Associated indexes may be modified to reflect the new address location in LLRRM. Deduplication may be performed using a mapping mechanism containing mappings of deduplicated blocks to matching blocks, the mappings being used for performing read requests. Deduplication described herein may reduce read latency, as LLRRM has lower latency than disk devices in performing random read requests.
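One possible sketch of the mechanism in Python: a fingerprint table detects matching blocks, indexes reference the single stored copy, and a block whose index count reaches the threshold is moved from disk to LLRRM with its indexes repointed. The tables, hash-based matching, and threshold value are illustrative assumptions.

```python
# Simplified deduplication sketch with an LLRRM promotion threshold.

import hashlib

THRESHOLD = 2        # promote a block to LLRRM once this many indexes reference it

disk = {}            # address -> block data (slow random reads)
llrrm = {}           # address -> block data (fast random reads)
fingerprints = {}    # content hash -> (location, address) of the matching block
ref_counts = {}      # (location, address) -> number of indexes referencing it
index_table = {}     # logical block number -> (location, address)

def write_block(lbn: int, data: bytes) -> None:
    digest = hashlib.sha256(data).hexdigest()
    if digest in fingerprints:
        # Duplicate: deduplicate by producing an index to the matching block.
        loc = fingerprints[digest]
        index_table[lbn] = loc
        ref_counts[loc] += 1
        if loc[0] == "disk" and ref_counts[loc] >= THRESHOLD:
            promote_to_llrrm(digest, loc)
    else:
        addr = len(disk)
        disk[addr] = data
        fingerprints[digest] = ("disk", addr)
        ref_counts[("disk", addr)] = 1
        index_table[lbn] = ("disk", addr)

def promote_to_llrrm(digest: str, loc: tuple) -> None:
    """Transfer a heavily shared block to LLRRM and repoint its indexes."""
    _, addr = loc
    new_addr = len(llrrm)
    llrrm[new_addr] = disk.pop(addr)
    new_loc = ("llrrm", new_addr)
    fingerprints[digest] = new_loc
    ref_counts[new_loc] = ref_counts.pop(loc)
    for lbn, l in index_table.items():
        if l == loc:
            index_table[lbn] = new_loc   # reflect the new address location

def read_block(lbn: int) -> bytes:
    location, addr = index_table[lbn]
    return llrrm[addr] if location == "llrrm" else disk[addr]

write_block(0, b"same data")
write_block(1, b"same data")        # reaches the threshold; block moves to LLRRM
assert read_block(1) == b"same data"
```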
Abstract:
A mechanism is provided to automatically retrieve zoning best practices from a centralized repository and to ensure that automatically generated zones do not violate these best practices. A user selects a set of hosts and storage controllers. The user also selects a guidance policy for creating the zone and a set of validation policies that must be enforced on the zone. If the user selects a guidance policy and validation policy combination that is incompatible, the mechanism allows the user to change either the selected guidance policy or the set of validation policies. If the user has selected consistent-zoning as the guidance policy, then the mechanism automatically selects a guidance policy that does not violate the known validation policies.
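A compatibility check like the one described might look as follows; the policy names and the compatibility table stand in for a real best-practice repository and are entirely hypothetical.

```python
# Illustrative sketch of checking a selected guidance policy against the selected
# validation policies; policy names and the compatibility table are made up.

GUIDANCE_POLICIES = {"single-initiator-zoning", "consistent-zoning", "one-zone-per-fabric"}

# Which validation policies each guidance policy can satisfy (assumed).
COMPATIBLE = {
    "single-initiator-zoning": {"no-mixed-vendor-zones", "max-zone-members"},
    "one-zone-per-fabric": {"max-zone-members"},
    "consistent-zoning": set(),   # resolved automatically below
}

def choose_guidance(selected_guidance: str, validation: set[str]) -> str:
    """Return a guidance policy that does not violate the validation policies."""
    if selected_guidance == "consistent-zoning":
        # Automatically pick a guidance policy compatible with all validations.
        for candidate, satisfied in COMPATIBLE.items():
            if candidate != "consistent-zoning" and validation <= satisfied:
                return candidate
        raise ValueError("no guidance policy satisfies all validation policies")
    if validation <= COMPATIBLE[selected_guidance]:
        return selected_guidance
    # Incompatible combination: ask the user to change one of the selections.
    raise ValueError(
        f"{selected_guidance} conflicts with {validation - COMPATIBLE[selected_guidance]}; "
        "change the guidance policy or the validation policies"
    )

print(choose_guidance("consistent-zoning", {"max-zone-members"}))
```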
Abstract:
Provided is an article of manufacture, system and method for a management system for using host and storage controller port information to configure paths between a host and storage controller in a network. A management system is coupled to a network, wherein the management system communicates over the network with a plurality of hosts, storage controllers, and a network monitor to configure paths in the network between the hosts and the storage controllers in order for the storage controllers to provide storage services to the hosts. The network monitor collects statistics from the components in the network. The management system obtains from the network monitor information on ports on at least one host, ports on at least one storage controller managing access to storage volumes, and at least one fabric over which the at least one host and storage controller ports connect. The management system gathers, for at least one host port and storage controller port, information on a connection metric indicating a number of paths in which the port is configured and a traffic metric indicating Input/Output (I/O) traffic at the port. The management system processes the connection and traffic metrics for the host and storage controller ports to select at least one host port and at least one storage controller port. The management system configures the at least one selected host and storage controller port pair to provide at least one path enabling the host to communicate with the selected storage controller port to access at least one storage volume managed by the selected storage controller.
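A simplified sketch of the port-pair selection, assuming the connection and traffic metrics are combined into a single load score and the least-loaded host/controller pair on a common fabric is chosen; the weighting and the Port fields are assumptions, since the abstract does not fix a formula.

```python
# Hypothetical sketch of the management system's port-pair selection: it scores
# host and storage controller ports by the paths they already carry and the I/O
# traffic they see, then configures a path over the least-loaded pair that
# shares a fabric.

from dataclasses import dataclass

@dataclass
class Port:
    wwpn: str
    fabric: str
    path_count: int       # connection metric: paths the port is configured in
    io_rate: float        # traffic metric: observed I/O at the port

def load(port: Port) -> float:
    # Assumed weighting of the two metrics.
    return port.path_count + 0.001 * port.io_rate

def select_path(host_ports: list[Port], controller_ports: list[Port]) -> tuple[Port, Port]:
    """Pick the least-loaded host/controller port pair on a common fabric."""
    candidates = [(h, c) for h in host_ports for c in controller_ports
                  if h.fabric == c.fabric]
    if not candidates:
        raise RuntimeError("no host and storage controller ports share a fabric")
    return min(candidates, key=lambda pair: load(pair[0]) + load(pair[1]))

host_port, ctrl_port = select_path(
    [Port("10:00:aa", "fabA", 2, 1500.0), Port("10:00:ab", "fabB", 0, 200.0)],
    [Port("50:00:01", "fabA", 4, 9000.0), Port("50:00:02", "fabB", 1, 300.0)],
)
print(f"configure path {host_port.wwpn} -> {ctrl_port.wwpn} on fabric {host_port.fabric}")
```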
Abstract:
Provided are a system and article of manufacture for using host and storage controller port information to configure paths between a host and storage controller. Information is gathered on ports on at least one host, ports on at least one storage controller managing access to storage volumes, and at least one fabric over which the at least one host and storage controller ports connect. For at least one host port and storage controller port, information is gathered on a connection metric related to a number of paths in which the port is configured and a traffic metric indicating Input/Output (I/O) traffic at the port. A determination is made of which host and storage controller ports are available to provide paths between one host and storage controller. The connection and traffic metrics for the available host ports are processed to select at least one host port. The connection and traffic metrics for the available storage controller ports are processed to select at least one storage controller port. The at least one selected host and storage controller port pair are configured to provide at least one path enabling the host to communicate with the selected storage controller port to access at least one storage volume managed by the selected storage controller.
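This variant first determines the available ports and then selects each side independently; the sketch below ranks ports by path count with I/O traffic as a tie-breaker, which is one plausible way to process the two metrics, not the claimed method.

```python
# Sketch of the per-side selection in this variant: determine which ports are
# available for paths between the given host and storage controller, then rank
# each side separately. The dict fields and the ranking are illustrative.

def available_ports(ports, common_fabrics):
    """Ports that sit on a fabric shared by the host and the storage controller."""
    return [p for p in ports if p["fabric"] in common_fabrics]

def best_port(ports):
    """Prefer ports configured in fewer paths; break ties on lower I/O traffic."""
    return min(ports, key=lambda p: (p["path_count"], p["io_rate"]))

host_ports = [
    {"wwpn": "10:00:aa", "fabric": "fabA", "path_count": 3, "io_rate": 800.0},
    {"wwpn": "10:00:ab", "fabric": "fabA", "path_count": 1, "io_rate": 950.0},
]
controller_ports = [
    {"wwpn": "50:00:01", "fabric": "fabA", "path_count": 2, "io_rate": 400.0},
    {"wwpn": "50:00:02", "fabric": "fabC", "path_count": 0, "io_rate": 0.0},
]

common = {"fabA"}
selected_host = best_port(available_ports(host_ports, common))
selected_ctrl = best_port(available_ports(controller_ports, common))
print(f"configure path {selected_host['wwpn']} -> {selected_ctrl['wwpn']}")
```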
Abstract:
A system, program storage device, and method of optimizing data placement on a storage device, the method comprising establishing a specified time constraint within which the storage device is to delete data stored thereon; dividing a data object into a plurality of data bits; programming a block of data and the data bits with a logic operand if the storage device is incapable of deleting the data within the specified time constraint; creating an encoded block of data from the programmed block of data and the data bits; organizing the encoded block of data and the data bits in the storage device according to data deletion requirements; and removing the data bits from the storage device if the data bits are organized within a specified data deletion requirement, wherein the data bits are removed using a data shredding process, and wherein the logic operand comprises an exclusive-or (XOR) operator.
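One simplified reading of the XOR encoding, treating the data bits as a random pad the same size as the block: shredding only the data bits (placed on media that satisfies the deletion time constraint) leaves the encoded block unrecoverable. Sizes and placement labels are assumptions.

```python
# Sketch of the XOR-based encoding described above: the data bits act as a
# same-size random pad XORed with the block, so shredding the data bits alone
# renders the larger encoded block unrecoverable.

import os

def encode(block: bytes) -> tuple[bytes, bytes]:
    """Return (encoded_block, data_bits) where encoded_block = block XOR data_bits."""
    data_bits = os.urandom(len(block))                       # pad sized to the block
    encoded = bytes(b ^ k for b, k in zip(block, data_bits))
    return encoded, data_bits

def decode(encoded: bytes, data_bits: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(encoded, data_bits))

# Place the encoded block on media that cannot be erased quickly and the data
# bits on media that satisfies the deletion requirement; shredding the data
# bits alone effectively deletes the data within the time constraint.
original = b"sensitive customer record"
encoded_block, data_bits = encode(original)
assert decode(encoded_block, data_bits) == original
data_bits = None    # stand-in for shredding the data bits
```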
Abstract:
When an alarm condition relating to a performance goal of a storage system is detected, a storage management system invokes an N-step lookahead engine for simulating operation of the storage system when there are multiple actions that could be taken by the storage system for eliminating the alarm condition. The N-step lookahead engine generates N possible system states based on a current state of the storage system. The N possible states are based on a cost model of each of the multiple actions. Each cost model is based on an action, a behavior implication of the action, a resource implication of the action, and a transient cost of the action. An action is selected that generates a system state optimizing stability, prerequisites, and the transient cost of invoking the selected action.
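The lookahead can be sketched as a bounded search over action sequences, where each action's cost model contributes a state change (behavior and resource implications) plus a transient cost; the actions, state fields, and weights below are invented for illustration.

```python
# Illustrative N-step lookahead sketch: enumerate action sequences up to depth N,
# simulate the resulting states using each action's cost model, and take the
# first action of the lowest-cost sequence.

from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class State:
    latency_ms: float         # the metric behind the alarm condition
    free_capacity_gb: float

@dataclass(frozen=True)
class Action:
    name: str
    latency_delta: float      # behavior implication of the action
    capacity_delta: float     # resource implication of the action
    transient_cost: float     # transient cost of carrying out the action

ACTIONS = [
    Action("throttle-workload", latency_delta=-3.0, capacity_delta=0.0, transient_cost=1.0),
    Action("migrate-volume", latency_delta=-6.0, capacity_delta=-50.0, transient_cost=8.0),
    Action("do-nothing", latency_delta=0.0, capacity_delta=0.0, transient_cost=0.0),
]

def apply_action(state: State, action: Action) -> State:
    return replace(state,
                   latency_ms=state.latency_ms + action.latency_delta,
                   free_capacity_gb=state.free_capacity_gb + action.capacity_delta)

def penalty(state: State) -> float:
    # Lower is better: distance from a 10 ms latency goal plus a capacity penalty.
    return max(0.0, state.latency_ms - 10.0) * 5.0 + max(0.0, 100.0 - state.free_capacity_gb)

def lookahead(state: State, depth: int) -> tuple[float, Optional[Action]]:
    """Return (best total cost, first action of the best depth-step sequence)."""
    if depth == 0:
        return penalty(state), None
    best_cost, best_action = float("inf"), None
    for action in ACTIONS:
        future_cost, _ = lookahead(apply_action(state, action), depth - 1)
        total = action.transient_cost + future_cost
        if total < best_cost:
            best_cost, best_action = total, action
    return best_cost, best_action

_, chosen = lookahead(State(latency_ms=18.0, free_capacity_gb=200.0), depth=3)
print(f"selected action: {chosen.name}")
```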
Abstract:
An intelligent offload engine is provided to configure protocol processing between a host and the intelligent offload engine in order to optimize protocol processing. The intelligent offload engine provides for evaluating the host and the host environment to identify system parameters associated with the host and a host bus adapter card, wherein the intelligent offload engine exists at the host bus adapter card. Also, the intelligent offload engine determines the ability of the host and the intelligent offload engine to perform protocol processing according to the identified system parameters. In addition, the intelligent offload engine determines an optimal protocol processing configuration between the host and the intelligent offload engine, according to the determined ability of the host to perform protocol processing and the intelligent offload engine's ability to perform protocol processing. Moreover, the intelligent offload engine implements the determined optimal protocol processing configuration.
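A hypothetical sketch of the configuration decision: given system parameters for the host and the HBA-resident engine, each protocol-processing task is assigned to whichever side is better able to handle it. The parameter names, task list, and thresholds are assumptions, not the patent's actual criteria.

```python
# Hypothetical sketch of the offload decision: evaluate system parameters of the
# host and the HBA-resident offload engine, then split protocol-processing tasks
# between them.

from dataclasses import dataclass

@dataclass
class SystemParameters:
    host_cpu_utilization: float      # 0.0 - 1.0
    host_memory_free_gb: float
    hba_engine_cycles_free: float    # spare capacity on the offload engine, 0.0 - 1.0
    hba_supports_full_offload: bool

PROTOCOL_TASKS = ["checksum", "segmentation", "reassembly", "encryption"]

def plan_offload(params: SystemParameters) -> dict[str, str]:
    """Assign each protocol-processing task to 'host' or 'offload-engine'."""
    config = {}
    for task in PROTOCOL_TASKS:
        engine_capable = params.hba_supports_full_offload or task in ("checksum", "segmentation")
        host_busy = params.host_cpu_utilization > 0.7 or params.host_memory_free_gb < 1.0
        if engine_capable and (host_busy or params.hba_engine_cycles_free > 0.5):
            config[task] = "offload-engine"
        else:
            config[task] = "host"
    return config

print(plan_offload(SystemParameters(0.85, 4.0, 0.6, hba_supports_full_offload=False)))
```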