Abstract:
A partition analyzer may be configured to designate a data partition within a database of a grid network, and to perform a mapping of the data partition to a task of an application, the application to be at least partially executed within the grid network. A provisioning manager may be configured to determine a task instance of the task, and to determine the data partition, based on the mapping, where the data partition may be stored at an initial node of the grid network. A processing node of the grid network having processing resources required to execute the task instance and a data node of the grid network having memory resources required to store the data partition may be determined. The task instance may be deployed to the processing node, and the data partition may be relocated from the initial node to the data node, based on these determinations.
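To make the co-location flow above concrete, the following is a minimal Python sketch of how a partition analyzer and a provisioning manager might interact. All names (PartitionAnalyzer, ProvisioningManager, the Node/DataPartition/TaskInstance records) and the first-fit placement rule are illustrative assumptions, not details taken from the abstract.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    cpu_free: float                      # processing resources still available
    mem_free: float                      # memory resources still available
    partitions: list = field(default_factory=list)

@dataclass
class DataPartition:
    partition_id: str
    size: float                          # memory needed to store the partition
    node: Node                           # node currently holding the partition

@dataclass
class TaskInstance:
    task_name: str
    cpu_needed: float

class PartitionAnalyzer:
    """Designates partitions and records the partition-to-task mapping."""
    def __init__(self):
        self.mapping = {}                # task name -> DataPartition

    def map_partition(self, task_name, partition):
        self.mapping[task_name] = partition

class ProvisioningManager:
    """Deploys a task instance and co-locates its mapped data partition."""
    def __init__(self, analyzer, grid):
        self.analyzer, self.grid = analyzer, grid

    def provision(self, instance):
        partition = self.analyzer.mapping[instance.task_name]

        # Processing node: first grid node with enough free processing resources.
        processing = next(n for n in self.grid if n.cpu_free >= instance.cpu_needed)

        # Data node: prefer the processing node so task and data end up together,
        # otherwise any node with enough free memory for the partition.
        data = processing if processing.mem_free >= partition.size else \
            next(n for n in self.grid if n.mem_free >= partition.size)

        # Deploy the task instance, then relocate the partition from its initial node.
        processing.cpu_free -= instance.cpu_needed
        old = partition.node
        if old is not data:
            old.partitions = [p for p in old.partitions if p is not partition]
            old.mem_free += partition.size
            data.partitions.append(partition)
            data.mem_free -= partition.size
            partition.node = data
        return processing, data

# Usage: two-node grid; the partition starts on node "b" and moves next to its task.
a = Node("a", cpu_free=4.0, mem_free=8.0)
b = Node("b", cpu_free=1.0, mem_free=8.0)
part = DataPartition("orders_2024", size=2.0, node=b)
b.partitions.append(part)
b.mem_free -= part.size

analyzer = PartitionAnalyzer()
analyzer.map_partition("aggregate_orders", part)
manager = ProvisioningManager(analyzer, [a, b])
processing, data = manager.provision(TaskInstance("aggregate_orders", cpu_needed=2.0))
print(processing.name, data.name)    # -> a a  (task and partition co-located on "a")
```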
Abstract:
A method and system are described for estimating resource provisioning. An example method may include obtaining a workflow path that includes an external invocation node and respective groups of service nodes, node connectors, and hardware nodes, and that forms a directed ordered path indicating the order in which execution of the services associated with the service nodes flows from the external invocation node to a hardware node. The method may further include determining an indicator of a service node workload based on attribute values associated with a service node and on an indicator of a propagated workload, the propagated workload being based on combining attribute values associated with the external invocation node and with other service nodes or node connectors preceding the service node in the workflow path, according to the ordering. The service node may then be provisioned onto a hardware node based on combining the indicator of the service node workload with an indicator of a current resource demand associated with the hardware node.
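As an illustration of the propagation and provisioning steps, here is a small Python sketch under assumed semantics: the external invocation node contributes an invocation rate, each node connector multiplies the calls it forwards, a service node's workload is its per-call cost times the propagated workload, and provisioning is a capacity check against a hardware node's current demand. The class and function names and the multiplicative combination rule are hypothetical, not taken from the abstract.

```python
from dataclasses import dataclass

@dataclass
class ExternalInvocationNode:
    invocation_rate: float          # e.g. requests per second arriving from outside

@dataclass
class NodeConnector:
    call_multiplier: float = 1.0    # downstream calls triggered per upstream call

@dataclass
class ServiceNode:
    name: str
    cost_per_call: float            # resource-cost attribute of the service

@dataclass
class HardwareNode:
    name: str
    capacity: float
    current_demand: float = 0.0     # resource demand already provisioned on this node

def propagated_workload(source, path_elements):
    """Combine attribute values of the invocation node and everything preceding
    the service node on the ordered path (here: multiply connector multipliers)."""
    workload = source.invocation_rate
    for element in path_elements:
        if isinstance(element, NodeConnector):
            workload *= element.call_multiplier
    return workload

def service_node_workload(service, incoming_workload):
    """Indicator of the service node workload: its own attribute values
    combined with the propagated workload reaching it."""
    return service.cost_per_call * incoming_workload

def provision(service, workload, hardware_nodes):
    """Place the service on a hardware node by combining its workload with the
    node's current resource demand and checking the result against capacity."""
    for hw in hardware_nodes:
        if hw.current_demand + workload <= hw.capacity:
            hw.current_demand += workload
            return hw
    raise RuntimeError(f"no hardware node can host {service.name}")

# Usage: invocation -> connector -> service, provisioned onto one of two hosts.
source = ExternalInvocationNode(invocation_rate=50.0)
connector = NodeConnector(call_multiplier=2.0)
billing = ServiceNode("billing", cost_per_call=0.2)
hosts = [HardwareNode("hw1", capacity=100.0), HardwareNode("hw2", capacity=40.0)]

incoming = propagated_workload(source, [connector])     # 100 calls/s reach the service
demand = service_node_workload(billing, incoming)       # 20 resource units
chosen = provision(billing, demand, hosts)              # -> hw1
print(chosen.name, chosen.current_demand)
```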
Abstract:
Dynamic resolution of dependent components of a plug-in is described. During a runtime of an application, a manifest is dynamically accessed for a plug-in invoked by the application, the manifest listing classes capable of providing an interface for the plug-in and dependent components that provide functionality to the plug-in, and a class instance of at least one of the listed classes is dynamically instantiated. Furthermore, the process includes dynamically resolving the listed dependent components and dynamically loading the plug-in.
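A hedged sketch of this runtime flow in Python follows, assuming a JSON manifest per plug-in that lists interface classes (as dotted import paths) and dependent components. The manifest layout, the load_plugin helper, and the bind_dependencies hook are all illustrative assumptions, not part of the described process.

```python
import importlib
import json
from pathlib import Path

def load_plugin(plugin_name: str, manifest_dir: str = "plugins"):
    """Dynamically resolve and load a plug-in at application runtime.

    Assumes one JSON manifest per plug-in of the (hypothetical) form:
      {"interface_classes": ["pkg.module.ClassName", ...],
       "dependencies": ["other_plugin", ...]}
    """
    # 1. Dynamically access the manifest for the invoked plug-in.
    manifest_path = Path(manifest_dir) / f"{plugin_name}.json"
    manifest = json.loads(manifest_path.read_text())

    # 2. Dynamically resolve the listed dependent components first
    #    (here: recursively load each dependency as its own plug-in;
    #    circular dependencies are not handled in this sketch).
    dependencies = {
        dep: load_plugin(dep, manifest_dir)
        for dep in manifest.get("dependencies", [])
    }

    # 3. Dynamically instantiate a class instance of at least one listed class
    #    capable of providing the plug-in's interface.
    module_path, _, class_name = manifest["interface_classes"][0].rpartition(".")
    plugin_cls = getattr(importlib.import_module(module_path), class_name)
    instance = plugin_cls()

    # 4. Load the plug-in: hand it its resolved dependencies if it accepts them.
    if hasattr(instance, "bind_dependencies"):
        instance.bind_dependencies(dependencies)
    return instance
```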
Abstract:
User interfaces are described for modeling estimations of resource provisioning. An example user interface may request a display of graphical indicators associated with nodes and edges, where the nodes may include external invocation nodes, service nodes, and hardware nodes, and the edges may include node connectors. An indication of an arrangement of an external invocation node, a group of service nodes, a group of node connectors, and a group of hardware nodes may be received; the arrangement may be configured by a user interacting with the displayed graphical indicators and may represent a workflow path. The user interface may further request a determination of an indicator of a service node workload associated with a service node included in the workflow path, based on attribute values associated with the service node and an indicator of a propagated workload, and may request provisioning of the service nodes onto the hardware nodes.
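The sketch below illustrates one plausible shape for such a user-interface controller in Python: it receives the user-configured arrangement, interprets it as a workflow path, determines service node workloads, and requests provisioning onto hardware nodes. The Arrangement record, the ModelingController methods, and the greedy placement rule are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Arrangement:
    invocation_node: dict           # e.g. {"rate": 50.0}
    service_nodes: list             # ordered along the workflow path
    node_connectors: list           # one connector per hop, e.g. {"multiplier": 2.0}
    hardware_nodes: list            # e.g. {"name": "hw1", "capacity": 100.0}

class ModelingController:
    def request_display(self, arrangement: Arrangement) -> None:
        # A real UI would render graphical indicators for the nodes and edges here.
        print(f"displaying {len(arrangement.service_nodes)} services and "
              f"{len(arrangement.hardware_nodes)} hardware nodes")

    def receive_arrangement(self, arrangement: Arrangement) -> list:
        # Interpret the user-configured arrangement as a workflow path:
        # workload propagates from the invocation node through the connectors.
        workload = arrangement.invocation_node["rate"]
        demands = []
        for service, connector in zip(arrangement.service_nodes,
                                      arrangement.node_connectors):
            workload *= connector["multiplier"]
            demands.append((service["name"], workload * service["cost_per_call"]))
        return demands

    def request_provisioning(self, demands, arrangement: Arrangement) -> dict:
        # Greedy placement of each service demand onto a hardware node with room,
        # combining the demand with the node's current resource demand.
        load = {hw["name"]: 0.0 for hw in arrangement.hardware_nodes}
        placement = {}
        for name, demand in demands:
            hw = next(h for h in arrangement.hardware_nodes
                      if load[h["name"]] + demand <= h["capacity"])
            load[hw["name"]] += demand
            placement[name] = hw["name"]
        return placement
```

In an actual implementation the arrangement would arrive from the drawing canvas as the user connects the graphical indicators; the controller here only models the data flow the abstract describes (receive the arrangement, determine workloads, request provisioning).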