Abstract:
Systems and methods for path selection for application commands are described. To this end, information associated with at least one application command that was processed at at least one port of a target device is received. For a subsequent application command, a set of ports of the target device is determined. In one implementation, the set of ports is determined based on information associated with the subsequent application command. Once the set of ports is determined, the subsequent application command is directed to a port selected from the set of ports.
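A minimal Python sketch of one way such selection could work; the names (PortSelector, record, eligible_ports, select) and the least-loaded tie-break are assumptions for illustration, not taken from the source:

from collections import defaultdict

class PortSelector:
    def __init__(self, ports):
        self.ports = list(ports)
        # history[port] holds information about commands already processed at that port
        self.history = defaultdict(list)

    def record(self, port, command_info):
        """Receive information about an application command processed at a port."""
        self.history[port].append(command_info)

    def eligible_ports(self, command_info):
        """Determine the set of ports from the subsequent command's information
        (here: ports that have already served the same command type)."""
        matching = [p for p in self.ports
                    if any(c["type"] == command_info["type"] for c in self.history[p])]
        return matching or self.ports  # fall back to all ports if none match

    def select(self, command_info):
        """Direct the command to a port chosen from the set (least loaded here)."""
        candidates = self.eligible_ports(command_info)
        return min(candidates, key=lambda p: len(self.history[p]))

selector = PortSelector(["port0", "port1"])
selector.record("port0", {"type": "read", "size": 4096})
print(selector.select({"type": "read", "size": 8192}))  # -> port0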
Abstract:
A method generates input/output (IO) commands by plural different applications that execute on a host. The method prioritizes the applications by inserting different classifiers into the IO commands at a host bus adapter (HBA) located in the host. A storage device receives the IO commands and processes the IO commands according to priorities based on the classifiers for the applications.
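A minimal Python sketch of this flow; the classifier names, the priority table, and the HBA/storage-device function names are hypothetical placeholders, not the patented scheme itself:

import heapq
import itertools

CLASSIFIER_PRIORITY = {"db": 0, "backup": 2, "default": 1}  # lower value = higher priority

def hba_insert_classifier(io_command, app_name):
    """HBA-side step: insert a per-application classifier into the IO command."""
    io_command["classifier"] = app_name if app_name in CLASSIFIER_PRIORITY else "default"
    return io_command

class StorageDevice:
    def __init__(self):
        self._queue, self._seq = [], itertools.count()

    def receive(self, io_command):
        """Queue the command with a priority derived from its classifier."""
        prio = CLASSIFIER_PRIORITY[io_command["classifier"]]
        heapq.heappush(self._queue, (prio, next(self._seq), io_command))

    def process_next(self):
        """Serve the highest-priority pending command."""
        _, _, cmd = heapq.heappop(self._queue)
        return cmd

dev = StorageDevice()
dev.receive(hba_insert_classifier({"op": "write", "lba": 10}, "backup"))
dev.receive(hba_insert_classifier({"op": "read", "lba": 42}, "db"))
print(dev.process_next())  # the db command is served first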
Abstract:
Methods to provide workload performance control are described herein. Performance statistics for a plurality of workloads are obtained for a second time interval, which includes a plurality of first time intervals. The performance statistics are based on monitored data (220) obtained at each of the plurality of first time intervals. From the plurality of workloads, at least one workload having an anomaly in resource allocation is identified using the performance statistics. Resources to at least mitigate the anomaly are associated with the at least one workload.
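A minimal Python sketch of the aggregation, detection, and mitigation steps; the median-based anomaly test, the thresholds, and all names are assumptions for illustration only:

from statistics import mean, median

def second_interval_stats(samples):
    """samples: {workload: [monitored value at each first time interval]}."""
    return {w: {"mean": mean(s)} for w, s in samples.items()}

def find_anomalous(stats, factor=3.0):
    """Flag workloads whose mean is far above the median across workloads."""
    med = median(v["mean"] for v in stats.values())
    return [w for w, v in stats.items() if v["mean"] > factor * med]

def mitigate(workload, allocations, extra_iops=100):
    """Associate additional resources with the anomalous workload."""
    allocations[workload] = allocations.get(workload, 0) + extra_iops

monitored = {"wl_a": [5, 6, 5, 7], "wl_b": [5, 5, 6, 5], "wl_c": [40, 45, 50, 42]}
allocations = {}
for w in find_anomalous(second_interval_stats(monitored)):
    mitigate(w, allocations)
print(allocations)  # {'wl_c': 100}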
Abstract:
The present invention relates to managing I/O requests in a storage system by dynamically changing scheduling parameters to achieve an optimal turnaround time for I/O requests pending for processing at a component in the storage system. The scheduling parameters are changed based on a feedback mechanism. The turnaround time of the I/O requests is calculated as the ratio of the I/O request processing rate and the average number of I/O requests in the component.
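A minimal Python sketch of such a feedback loop, reading the stated ratio in the Little's-law sense (turnaround time roughly equals the average number of pending requests divided by the processing rate); the weight-adjustment rule, step size, and names are assumptions, not the source's mechanism:

def turnaround_time(avg_pending_requests, processing_rate):
    """Estimate turnaround time from queue occupancy and service rate (Little's law)."""
    return avg_pending_requests / processing_rate

def adjust_weight(weight, measured, target, step=0.1):
    """Feedback: raise the component's scheduling weight when it is too slow."""
    if measured > target:
        return weight * (1 + step)
    if measured < target:
        return max(0.1, weight * (1 - step))
    return weight

weight, target = 1.0, 0.02
for avg_pending, rate in [(8, 200.0), (12, 250.0), (4, 300.0)]:
    measured = turnaround_time(avg_pending, rate)
    weight = adjust_weight(weight, measured, target)
    print(f"turnaround={measured:.3f}s weight={weight:.2f}")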
Abstract:
A method and system of a host device hosting multiple workloads for controlling flows of I/O requests directed to a storage device are disclosed. In one embodiment, the type of a response from the storage device to an I/O request issued by an I/O stack layer of the host device is determined. Then, the workload associated with the I/O request is identified among the multiple workloads based on the response to the I/O request. Further, a maximum queue depth assigned to the workload is adjusted based on the type of the response, where the maximum queue depth is the maximum number of I/O requests from the workload that are concurrently issuable by the I/O stack layer.
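A minimal Python sketch of per-workload queue-depth adjustment; the response names, the additive-increase/halving rule, and the tag-based workload lookup are illustrative assumptions rather than the claimed method:

MAX_DEPTH, MIN_DEPTH = 64, 1

def identify_workload(response, tag_to_workload):
    """Map the storage device's response back to its workload via the request tag."""
    return tag_to_workload[response["tag"]]

def adjust_queue_depth(depths, workload, response_type):
    """Grow the workload's maximum queue depth on success, shrink it on back-pressure."""
    if response_type == "SUCCESS":
        depths[workload] = min(MAX_DEPTH, depths[workload] + 1)
    elif response_type in ("QUEUE_FULL", "BUSY"):
        depths[workload] = max(MIN_DEPTH, depths[workload] // 2)
    return depths[workload]

depths = {"oltp": 16, "backup": 16}
tags = {101: "oltp", 102: "backup"}
for resp in [{"tag": 101, "type": "SUCCESS"}, {"tag": 102, "type": "QUEUE_FULL"}]:
    wl = identify_workload(resp, tags)
    print(wl, adjust_queue_depth(depths, wl, resp["type"]))  # oltp 17, backup 8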
Abstract:
Dynamic discovery of active peer applications in a network, and of information related thereto, is described. In one embodiment of the present invention, the discovery of, and information related to, peer applications is maintained by a plurality of network device peers. This information is supplemented by device or peer application failure information, which is identified through point-to-point communication initiated by a failure to receive a multicast packet from a particular network peer application.
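A minimal Python sketch modeling only the logic (no real sockets): peers learn each other from multicast heartbeats, and a peer whose multicast is overdue is probed point-to-point before being recorded as failed. The class and method names, timeout, and probe callback are hypothetical:

import time

class PeerTable:
    def __init__(self, timeout=3.0):
        self.timeout = timeout
        self.peers = {}   # peer_id -> {"info": ..., "last_seen": ...}
        self.failed = {}  # peer_id -> failure information

    def on_multicast_heartbeat(self, peer_id, app_info):
        """Discovery: learn or refresh a peer application from its multicast packet."""
        self.peers[peer_id] = {"info": app_info, "last_seen": time.monotonic()}

    def check_silent_peers(self, probe):
        """For peers whose multicast is overdue, fall back to point-to-point communication."""
        now = time.monotonic()
        for peer_id, entry in list(self.peers.items()):
            if now - entry["last_seen"] >= self.timeout and not probe(peer_id):
                self.failed[peer_id] = {"info": entry["info"], "detected_at": now}
                del self.peers[peer_id]

table = PeerTable(timeout=0.0)
table.on_multicast_heartbeat("peer-1", {"app": "replicator", "version": 2})
table.check_silent_peers(probe=lambda peer_id: False)  # unicast probe also failed
print(table.failed)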