Abstract:
The present invention provides a system and method for provisioning Cloud services by establishing a Cloud services catalog using a Cloud service bus within a Cloud computing environment. In one embodiment, there is a Cloud services catalog manager configured to: connect a plurality of Clouds in a Cloud computing environment; maintain a catalog of integrated Cloud services from the plurality of connected Clouds; and display an index of the integrated services on a user interface. This system and method allows multiple disparate services, offered by different partners across unrelated, physically distinct Clouds, to be presented as a single index of integrated services.
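As an illustrative sketch only (the class names and fields below are hypothetical and are not drawn from the abstract), the catalog manager described above can be pictured as a component that collects service listings from several connected Clouds and exposes them as one sorted index:

```python
from dataclasses import dataclass, field


@dataclass
class CloudService:
    """One service offered by a partner on a connected Cloud."""
    name: str
    provider: str
    cloud: str          # which physically distinct Cloud hosts the service
    description: str = ""


@dataclass
class CloudServicesCatalogManager:
    """Aggregates services from several disparate Clouds into one catalog."""
    catalog: list[CloudService] = field(default_factory=list)

    def connect(self, cloud_name: str, services: list[CloudService]) -> None:
        # A real system would discover services over a Cloud service bus;
        # here the discovered services are simply passed in.
        self.catalog.extend(services)

    def index(self) -> list[str]:
        # Present the integrated services as a single sorted index,
        # regardless of which Cloud offers them.
        return sorted(f"{s.name} ({s.provider} @ {s.cloud})" for s in self.catalog)
```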
Abstract:
Embodiments of the present invention provide an approach for optimizing the energy consumed in workload processing in a networked computing environment (e.g., a cloud computing environment). Specifically, when a workload is received, an energy profile (e.g., contained in a computerized data structure) associated with the workload is identified. Typically, the energy profile identifies a set of computing resources needed to process the workload (e.g., storage requirements, server requirements, processing requirements, network bandwidth requirements, etc.), energy consumption attributes of the set of computing resources, and a proposed duration of the workload. Based on the information contained in the energy profile (and resource availability), a schedule (e.g., time, location, etc.) for processing the workload will be determined so as to optimize energy consumption associated with the processing of the workload. In a typical embodiment, the schedule will be determined such that a total cost for processing the workload can be minimized and/or any budgeted amount/cost can be met.
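A minimal sketch of this idea, with hypothetical field names and a simple cost model not specified in the abstract: an energy profile records the workload's resource needs and energy attributes, and a schedule slot (time and location) is chosen to minimize total energy cost subject to an optional budget.

```python
from dataclasses import dataclass


@dataclass
class EnergyProfile:
    """Energy profile for a workload (illustrative fields only)."""
    cpu_hours: float           # processing requirement
    storage_gb: float          # storage requirement
    duration_hours: float      # proposed duration of the workload
    watts_per_cpu_hour: float  # energy consumption attribute of the resources


@dataclass
class ScheduleSlot:
    """A candidate time/location for running the workload."""
    location: str
    start_hour: int
    energy_price_per_kwh: float  # price varies by time and location


def pick_schedule(profile: EnergyProfile,
                  slots: list[ScheduleSlot],
                  budget: float | None = None) -> ScheduleSlot | None:
    """Choose the slot that minimizes total energy cost, honoring any budget."""
    def cost(slot: ScheduleSlot) -> float:
        kwh = profile.cpu_hours * profile.watts_per_cpu_hour / 1000.0
        return kwh * slot.energy_price_per_kwh

    candidates = [s for s in slots if budget is None or cost(s) <= budget]
    return min(candidates, key=cost, default=None)
```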
Abstract:
Embodiments of the present invention provide an approach for a networked computing environment (e.g., a cloud computing environment) to be dynamic in nature in that it may automatically be resized based on current/predicted workload and current/predicted resource availability. For example, when a workload is received, a data structure (e.g., a mapping) will be created on a computer storage device and populated with data related to a set of current resources of the networked computing environment that are allocated to the workload. It will then be determined whether a mismatch (e.g., a shortfall) exists between the set of current resources and resources required for processing the workload. If so, a set of peripheral resources will be identified to rectify the mismatch. The networked computing environment will then be resized to accommodate the set of peripheral resources, and the workload will be processed using the resized networked computing environment.
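As a hedged sketch (the dictionary-based "mapping" and the function names are assumptions for illustration), the resizing step can be read as: record the currently allocated resources, compute any shortfall against what the workload requires, then grow the environment with peripheral resources to cover the gap.

```python
def find_shortfall(current: dict[str, float],
                   required: dict[str, float]) -> dict[str, float]:
    """Return, per resource type, how far the current allocation falls short."""
    return {r: required[r] - current.get(r, 0.0)
            for r in required
            if required[r] > current.get(r, 0.0)}


def resize(current: dict[str, float],
           required: dict[str, float],
           peripheral: dict[str, float]) -> dict[str, float]:
    """Grow the environment with peripheral resources to rectify a mismatch."""
    shortfall = find_shortfall(current, required)
    resized = dict(current)
    for resource, missing in shortfall.items():
        available = peripheral.get(resource, 0.0)
        resized[resource] = resized.get(resource, 0.0) + min(missing, available)
    return resized
```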
Abstract:
Embodiments of the present invention provide an approach for forecasting the capacity available for processing a workload in a networked computing environment (e.g., a cloud computing environment). Specifically, aspects of the present invention provide service availability for cloud subscribers by forecasting the capacity available for running or scheduled applications in a networked computing environment. In one embodiment, capacity data may be collected and analyzed in real time from a set of cloud service providers and/or peer cloud-based systems. In order to further increase forecast accuracy, historical data and forecast output may be post-processed. Data may be post-processed in a substantially continuous manner so as to assess the accuracy of previous forecasts. By factoring in actual capacity data collected after a forecast, and taking into account application requirements as well as other factors, substantially continuous calibration of the algorithm can occur so as to improve the accuracy of future forecasts and enable functioning in a self-learning (e.g., heuristic) mode.
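The continuous-calibration idea can be sketched with a deliberately simple model; the exponential-smoothing forecaster below is an assumption for illustration, not the algorithm claimed in the abstract. Each time an actual capacity observation arrives, the previous forecast error is folded back into the model, giving the self-learning behavior described above.

```python
class CapacityForecaster:
    """Forecasts available capacity and recalibrates as actual data arrives."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha               # smoothing weight, tuned by calibration
        self.level: float | None = None  # current capacity estimate

    def forecast(self) -> float | None:
        """Return the current forecast of available capacity (None until seeded)."""
        return self.level

    def calibrate(self, actual_capacity: float) -> None:
        # Post-processing step: compare the previous forecast with the observed
        # capacity and fold the error back into the model (self-learning mode).
        if self.level is None:
            self.level = actual_capacity
        else:
            self.level += self.alpha * (actual_capacity - self.level)
```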
Abstract:
Embodiments of the present invention provide a subscription service for documenting, verifying, administering, and auditing use of entitled software products in third-party networked computing environments (e.g., a cloud computing environment). Specifically, aspects of the invention provide an Entitlement Brokering System (EBS) (also referred to as an entitlement broker) that reduces the risk associated with clients improperly running licensed software products on their computing infrastructure, thus increasing the reliability and auditability of the software product's entitlement status and accelerating intake of new or existing clients through automation of the entitlement verification process.
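One way to picture the broker's verification and audit role, as a hypothetical sketch (the entitlement fields and audit format below are not taken from the abstract): each client request to run a licensed product is checked against the recorded entitlement, and the outcome is logged for later audit.

```python
from dataclasses import dataclass, field


@dataclass
class Entitlement:
    product: str
    licensed_instances: int


@dataclass
class EntitlementBroker:
    """Documents, verifies, and audits use of entitled software products."""
    entitlements: dict[str, Entitlement]
    audit_log: list[str] = field(default_factory=list)

    def verify(self, client: str, product: str, running_instances: int) -> bool:
        # Compare what the client is actually running against the entitlement.
        ent = self.entitlements.get(product)
        ok = ent is not None and running_instances <= ent.licensed_instances
        self.audit_log.append(
            f"{client}: {product} x{running_instances} -> {'OK' if ok else 'VIOLATION'}")
        return ok
```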
Abstract:
Embodiments of the present invention provide an approach for implementing service level agreements (SLAs) having variable service delivery requirements and pricing in a networked (e.g., cloud) computing environment. Under embodiments of the present invention, a plurality of SLAs, each having a different price level, is made available to a consumer. The consumer may select one or more of the plurality of SLAs that reflects the consumer's service delivery requirements in a cloud computing environment. A consumer having relatively inflexible service delivery requirements may select one of the SLAs having a relatively higher price, whereas a consumer having relatively flexible service delivery requirements may select one of the SLAs having a relatively lower price. In one embodiment, the SLAs may dynamically provide for relatively lower variable pricing in response to the consumer receiving deferred service or a relatively lower level of service during a peak service demand load. In another embodiment, the SLAs may dynamically provide for relatively higher variable pricing in response to consumer service requests that are fulfilled during a relatively higher overall service demand load. In yet another embodiment, the SLAs may dynamically provide for relatively lower variable pricing in response to consumer service requests that occur during a relatively lower overall service demand load.
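The variable-pricing behavior in the three embodiments can be summarized in a short sketch; the demand thresholds and multipliers below are invented for illustration and are not specified in the abstract.

```python
from dataclasses import dataclass


@dataclass
class SLA:
    name: str
    base_price: float  # a higher price buys less flexible service delivery
    flexible: bool     # True if the consumer tolerates deferred/reduced service


def variable_price(sla: SLA, demand_load: float, was_deferred: bool) -> float:
    """Adjust the SLA's price by overall demand and by any deferral at peak load.

    demand_load is a utilization fraction (0.0 = idle, 1.0 = peak).
    """
    price = sla.base_price
    if demand_load > 0.8:
        # Requests fulfilled during a relatively high overall demand load cost more ...
        price *= 1.25
        # ... unless the consumer accepted deferred or reduced service at the peak.
        if was_deferred and sla.flexible:
            price *= 0.6
    elif demand_load < 0.3:
        # Requests during a relatively low overall demand load are discounted.
        price *= 0.85
    return price
```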