Abstract:
Provided is a method for managing a network that provides Input/Output (I/O) paths between a plurality of host systems and storage volumes in storage systems. An application service connection definition is provided for each connection from a host to a storage volume. At least one service level guarantee definition is provided, indicating performance criteria that satisfy service requirements included in at least one service level agreement with at least one customer for network resources. Each service level guarantee definition is associated with at least one application service connection definition. Monitoring is performed to determine whether I/O requests transmitted through the I/O paths satisfy the performance criteria indicated in the service level guarantee definition associated with those paths.
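A minimal sketch of this idea in Python, using hypothetical names (ServiceLevelGuarantee, AppServiceConnection, violates_guarantee) rather than anything defined in the source: each host-to-volume connection is tied to a guarantee definition, and monitored I/O samples on that path are checked against the guaranteed criteria.

from dataclasses import dataclass
from typing import List

@dataclass
class ServiceLevelGuarantee:
    name: str
    max_latency_ms: float       # performance criterion from the SLA
    min_throughput_mbps: float  # performance criterion from the SLA

@dataclass
class AppServiceConnection:
    host: str
    storage_volume: str
    guarantee: ServiceLevelGuarantee  # each connection is associated with a guarantee

@dataclass
class IOSample:
    latency_ms: float
    throughput_mbps: float

def violates_guarantee(conn: AppServiceConnection, samples: List[IOSample]) -> bool:
    """Return True if monitored I/O on this path misses the guaranteed criteria."""
    if not samples:
        return False
    avg_latency = sum(s.latency_ms for s in samples) / len(samples)
    avg_throughput = sum(s.throughput_mbps for s in samples) / len(samples)
    return (avg_latency > conn.guarantee.max_latency_ms
            or avg_throughput < conn.guarantee.min_throughput_mbps)

# One host-to-volume path monitored against a "gold" guarantee.
gold = ServiceLevelGuarantee("gold", max_latency_ms=5.0, min_throughput_mbps=200.0)
path = AppServiceConnection(host="host-a", storage_volume="vol-17", guarantee=gold)
print(violates_guarantee(path, [IOSample(6.2, 180.0), IOSample(4.1, 210.0)]))  # True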
Abstract:
In one embodiment, a method comprises, using at least one processor, controlling communication between Service Level Agreement (SLA) processes of an SLA services module and at least one I/O performance gateway; and, using a thread pair associated with each of the at least one processors, processing inbound signals sent from the at least one I/O performance gateway to the SLA services module via an inbound thread, and processing outbound signals received from the SLA services module and destined for the at least one I/O performance gateway via an outbound thread, wherein the inbound thread and the outbound thread operate asynchronously to provide non-blocking messaging.
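A hedged sketch of the thread-pair arrangement described above, with assumed queue-based plumbing (inbound_q and outbound_q are illustrative, not from the source): one inbound thread carries signals from the I/O performance gateway to the SLA services module, one outbound thread carries signals the other way, and each thread drains its own queue so neither direction blocks the other.

import queue
import threading

inbound_q = queue.Queue()   # gateway -> SLA services module
outbound_q = queue.Queue()  # SLA services module -> gateway

def inbound_worker():
    # Delivers signals arriving from the I/O performance gateway to the SLA services module.
    while True:
        msg = inbound_q.get()
        if msg is None:
            break
        print(f"SLA services module received: {msg}")

def outbound_worker():
    # Delivers signals from the SLA services module back to the I/O performance gateway.
    while True:
        msg = outbound_q.get()
        if msg is None:
            break
        print(f"gateway received: {msg}")

# One pair per processor; the two threads run asynchronously, so a slow direction
# never blocks the other (non-blocking two-way messaging).
pair = (threading.Thread(target=inbound_worker), threading.Thread(target=outbound_worker))
for t in pair:
    t.start()
inbound_q.put("performance sample")
outbound_q.put("updated throttle setting")
inbound_q.put(None)   # shutdown sentinels
outbound_q.put(None)
for t in pair:
    t.join()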
Abstract:
A system for utilizing informed throttling to guarantee quality of service to a plurality of clients includes a server core having a performance analyzer that compares a performance level received by a client to a corresponding contracted service level and determines if the client qualifies as a victim whose received performance level is less than the corresponding contracted service level. The performance analyzer is further configured to identify one or more candidates for throttling in response to an I/O stream receiving insufficient resources by determining if the client qualifies as a candidate whose received performance level is better than the corresponding contracted service level. The server core further includes a scheduler that selectively and dynamically issues a throttling command to the candidate client, and provides a quality of service enforcement point by concurrently monitoring a plurality of I/O streams to candidate clients and concurrently issuing throttling commands to the candidate clients.
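The victim/candidate classification can be illustrated with a small sketch; the field names (contracted_iops, received_iops) and the IOPS metric are assumptions for the example, not taken from the source. A client falling below its contracted level is a victim; a client exceeding its contracted level is a throttling candidate, and the scheduler only issues throttling commands while at least one victim exists.

from dataclasses import dataclass
from typing import List

@dataclass
class Client:
    name: str
    contracted_iops: float  # contracted service level
    received_iops: float    # performance level actually received

def victims(clients: List[Client]) -> List[Client]:
    # A victim receives less than its contracted service level.
    return [c for c in clients if c.received_iops < c.contracted_iops]

def candidates(clients: List[Client]) -> List[Client]:
    # A candidate receives better than its contracted level and may be throttled.
    return [c for c in clients if c.received_iops > c.contracted_iops]

def schedule_throttling(clients: List[Client]) -> List[str]:
    # Only throttle candidates while at least one stream is a victim.
    if not victims(clients):
        return []
    return [f"throttle {c.name}" for c in candidates(clients)]

streams = [Client("db", contracted_iops=1000, received_iops=700),
           Client("backup", contracted_iops=200, received_iops=900)]
print(schedule_throttling(streams))  # ['throttle backup']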
Abstract:
The present system and associated method resolve the problem of providing statistical performance guarantees for applications that generate streams of read/write accesses (I/Os) on a shared, potentially distributed storage system with finite resources, by initiating throttling whenever an I/O stream receives insufficient resources. The severity of throttling is determined dynamically and adaptively at the storage subsystem level. Global, real-time knowledge about I/O streams is used to apply controls that guarantee quality of service to all I/O streams, providing dynamic control rather than a reservation of bandwidth or other resources made when an I/O stream is created and applied to that stream thereafter. The present system throttles at control points to distribute resources that are not co-located with the control point. A competition model, together with service time estimators and estimated workload characteristics, determines which I/O streams need to be throttled and the level of throttling required. A decision point selects which streams need to be throttled, and to what extent, and issues throttling commands to enforcement points.
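One plausible way to size the throttle, sketched under assumed formulas (the deficit-sharing rule and the service-time weighting here are illustrative, not the source's actual model): the victims' aggregate shortfall is split across candidate streams in proportion to their excess, weighted by estimated per-I/O service time so that streams whose I/Os consume more back-end resource absorb a larger cut.

from typing import Dict

def throttle_levels(received: Dict[str, float],
                    contracted: Dict[str, float],
                    service_time_ms: Dict[str, float]) -> Dict[str, float]:
    """Return the fraction (0..1) by which each candidate stream should be slowed."""
    # Aggregate shortfall (IOPS) suffered by victim streams.
    deficit = sum(max(contracted[s] - received[s], 0.0) for s in received)
    # Each candidate's excess, weighted by estimated per-I/O service time so that
    # streams whose I/Os consume more back-end resource absorb a larger cut.
    weighted_excess = {s: max(received[s] - contracted[s], 0.0) * service_time_ms[s]
                       for s in received}
    total = sum(weighted_excess.values())
    if deficit == 0.0 or total == 0.0:
        return {}
    return {s: min(1.0, (weighted_excess[s] / total) * deficit / received[s])
            for s in received if weighted_excess[s] > 0.0}

# "db" is a victim (700 < 1000 contracted); "backup" over-performs and is throttled
# by roughly a third of its current rate to free the missing 300 IOPS.
print(throttle_levels(received={"db": 700, "backup": 900},
                      contracted={"db": 1000, "backup": 200},
                      service_time_ms={"db": 2.0, "backup": 8.0}))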
Abstract:
The present invention discloses a method, apparatus and program storage device for providing non-blocking, minimum threaded two-way messaging. A Performance Monitor Daemon provides one non-blocked thread pair per processor to support a large number of connections. The thread pair includes an outbound thread for outbound communication and an inbound thread for inbound communication. The outbound thread and the inbound thread operate asynchronously.
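A sketch of the per-processor pairing under assumed structure (ThreadPair, PerformanceMonitorDaemon, and the hash-based connection placement are illustrative): the daemon starts one inbound/outbound pair per processor and maps each connection onto a fixed pair, so the thread count stays constant as the number of connections grows.

import os
import queue
import threading
import time

class ThreadPair:
    """One inbound/outbound thread pair; more connections do not add more threads."""
    def __init__(self):
        self.inbound = queue.Queue()
        self.outbound = queue.Queue()
        for q, direction in ((self.inbound, "in"), (self.outbound, "out")):
            threading.Thread(target=self._pump, args=(q, direction), daemon=True).start()

    @staticmethod
    def _pump(q: queue.Queue, direction: str):
        # Each thread drains its own queue asynchronously.
        while True:
            conn_id, msg = q.get()
            print(f"[{direction}] connection {conn_id}: {msg}")

class PerformanceMonitorDaemon:
    def __init__(self):
        # One non-blocked pair per processor, independent of the connection count.
        self.pairs = [ThreadPair() for _ in range(os.cpu_count() or 1)]

    def pair_for(self, conn_id: int) -> ThreadPair:
        # Map each connection onto a fixed pair.
        return self.pairs[conn_id % len(self.pairs)]

daemon = PerformanceMonitorDaemon()
daemon.pair_for(42).inbound.put((42, "latency sample"))
daemon.pair_for(42).outbound.put((42, "ack"))
time.sleep(0.2)  # give the daemon threads a moment to drain the queues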