On demand application scheduling in a heterogeneous workload environment
    1.
    Invention Application
    Status: Pending - Published

    Publication No.: US20070180453A1

    Publication Date: 2007-08-02

    Application No.: US11340937

    Filing Date: 2006-01-27

    IPC Class: G06F9/46

    Abstract: Embodiments of the present invention address deficiencies of the art in respect to deploying heterogeneous workloads in separate resource pools and provide a method, system and computer program product for on-demand application scheduling in a heterogeneous environment. In one embodiment of the invention, a method for balancing nodal allocations in a resource pool common to both transactional workloads and long running workloads can include parsing a service policy for both transactional workloads and also long running workloads. An allocation of nodes for a common resource pool for the transactional and long running workloads can be determined to balance performance requirements for the transactional workloads and long running workloads specified by the service policy. Subsequently, the determined allocation can be applied to the common resource pool.
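
    A minimal Java sketch of the balancing step described above, assuming the parsed service policy reduces to a pair of relative weights and that nodes are split proportionally; the ServicePolicy record, the class name, and the proportional rule are illustrative assumptions, not taken from the patent.

    import java.util.Map;

    // Sketch: split a common resource pool between transactional and long-running
    // workloads according to a (hypothetical) weighted service policy.
    public class CommonPoolBalancer {

        // Hypothetical parsed service policy: relative importance of each workload class.
        record ServicePolicy(double transactionalWeight, double longRunningWeight) {}

        // Determine how many nodes of the common pool each workload class receives.
        static Map<String, Integer> determineAllocation(ServicePolicy policy, int totalNodes) {
            double totalWeight = policy.transactionalWeight() + policy.longRunningWeight();
            int transactionalNodes = (int) Math.round(totalNodes * policy.transactionalWeight() / totalWeight);
            int longRunningNodes = totalNodes - transactionalNodes; // remainder serves long-running jobs
            return Map.of("transactional", transactionalNodes, "longRunning", longRunningNodes);
        }

        // Apply the determined allocation to the common pool (stubbed as a log line here).
        static void applyAllocation(Map<String, Integer> allocation) {
            allocation.forEach((workloadClass, nodes) ->
                    System.out.println(workloadClass + " -> " + nodes + " nodes"));
        }

        public static void main(String[] args) {
            // Example: weights 3:1 over a 16-node common pool yields 12 and 4 nodes.
            applyAllocation(determineAllocation(new ServicePolicy(3.0, 1.0), 16));
        }
    }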


    Autonomic workload classification using predictive assertion for wait queue and thread pool selection
    3.
    Invention Application
    Status: Granted - In Force

    Publication No.: US20050183084A1

    Publication Date: 2005-08-18

    Application No.: US10778584

    Filing Date: 2004-02-13

    IPC Class: G06F9/46

    CPC Class: G06F9/505

    Abstract: Incoming work units (e.g., requests) in a computing workload are analyzed and classified according to predicted execution. Preferred embodiments track which instrumented wait points are encountered by the executing work units, and this information is analyzed to dynamically and autonomically create one or more recognizers to programmatically recognize similar, subsequently-received work units. When a work unit is recognized, its execution behavior is then predicted. Execution resources are then allocated to the work units in view of these predictions. The recognizers may be autonomically evaluated or tuned, thereby adjusting to changing workload characteristics. The disclosed techniques may be used advantageously in application servers, message-processing software, and so forth.
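
    As a rough illustration of the classify-and-dispatch idea, the sketch below keys recognizers on a request-type string and predicts execution time with an exponential moving average; the patent builds its recognizers from instrumented wait points, so the keying scheme, threshold, pool sizes, and names here are assumptions.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Sketch: predict a work unit's execution behaviour from past observations and
    // dispatch it to a thread pool sized for that behaviour.
    public class PredictiveDispatcher {

        private static final double SLOW_THRESHOLD_MS = 200.0;

        // Hypothetical recognizers: request type -> smoothed observed service time (ms).
        private final Map<String, Double> recognizers = new ConcurrentHashMap<>();

        private final ExecutorService fastPool = Executors.newFixedThreadPool(8);
        private final ExecutorService slowPool = Executors.newFixedThreadPool(2);

        // Dispatch a work unit to the pool predicted to suit it.
        void submit(String requestType, Runnable work) {
            double predictedMs = recognizers.getOrDefault(requestType, 0.0);
            ExecutorService pool = predictedMs > SLOW_THRESHOLD_MS ? slowPool : fastPool;
            pool.submit(() -> {
                long start = System.nanoTime();
                work.run();
                observe(requestType, (System.nanoTime() - start) / 1_000_000.0);
            });
        }

        // Autonomic tuning step: fold the observed time into the recognizer's estimate.
        private void observe(String requestType, double elapsedMs) {
            recognizers.merge(requestType, elapsedMs, (old, sample) -> 0.9 * old + 0.1 * sample);
        }

        public static void main(String[] args) throws InterruptedException {
            PredictiveDispatcher dispatcher = new PredictiveDispatcher();
            Runnable slowWork = () -> {
                try { Thread.sleep(300); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            };
            dispatcher.submit("GET /report", slowWork);   // no prediction yet: goes to the fast pool
            Thread.sleep(500);
            dispatcher.submit("GET /report", slowWork);   // now predicted slow: goes to the slow pool
            Thread.sleep(500);
            System.out.println("Learned estimates: " + dispatcher.recognizers);
            dispatcher.fastPool.shutdown();
            dispatcher.slowPool.shutdown();
        }
    }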


    Context key routing for parallel processing in an application serving environment
    4.
    Invention Application
    Status: Pending - Published

    Publication No.: US20070157212A1

    Publication Date: 2007-07-05

    Application No.: US11325151

    Filing Date: 2006-01-04

    IPC Class: G06F9/46

    CPC Class: G06F9/546 G06F9/544

    Abstract: In alternate embodiments, the invention is a message-passing process for routing communications between a transmitting parallel process and a receiving parallel process executing in an application server environment, or a machine or computer-readable memory having the message-passing process programmed therein, the message-passing process comprising: linking a context key to an addressable computing resource in the application server environment; linking the receiving parallel process to the context key; receiving a communication from the transmitting parallel process, wherein the communication transmits the context key; and routing the communication to the addressable computing resource linked to the context key.
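
    A minimal sketch of the claimed routing steps, assuming a String context key and an in-memory BlockingQueue standing in for the addressable computing resource; the registry class and method names are illustrative only.

    import java.util.Map;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.LinkedBlockingQueue;

    // Sketch: route communications between parallel processes by context key.
    public class ContextKeyRouter {

        // Context key -> addressable resource (a message queue in this sketch).
        private final Map<String, BlockingQueue<String>> routes = new ConcurrentHashMap<>();

        // Link a context key to an addressable resource; the receiving process reads from it.
        BlockingQueue<String> bind(String contextKey) {
            return routes.computeIfAbsent(contextKey, key -> new LinkedBlockingQueue<>());
        }

        // Route a communication that carries a context key to the linked resource.
        void route(String contextKey, String message) throws InterruptedException {
            BlockingQueue<String> target = routes.get(contextKey);
            if (target == null) {
                throw new IllegalStateException("no resource linked to context key " + contextKey);
            }
            target.put(message);
        }

        public static void main(String[] args) throws Exception {
            ContextKeyRouter router = new ContextKeyRouter();
            BlockingQueue<String> inbox = router.bind("order-42"); // receiving parallel process
            router.route("order-42", "partial result A");          // transmitting parallel process
            System.out.println(inbox.take());
        }
    }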


    Algorithm for Minimizing Rebate Value Due to SLA Breach in a Utility Computing Environment
    5.
    Invention Application
    Status: Expired

    Publication No.: US20060085544A1

    Publication Date: 2006-04-20

    Application No.: US10711981

    Filing Date: 2004-10-18

    IPC Class: G06F15/173

    CPC Class: G06F9/505

    Abstract: The invention described is a new and useful process for minimizing the overall rebate a provider disburses to customers when a service level agreement (SLA) breach occurs in a utility computing environment. Specifically, the process compares performance data and resource usage with the SLAs of the customers, and reallocates shared resources to those customers who represent a lesser penalty to the provider in the event of an SLA breach. The process determines which resources, used by customers representing the lesser penalty, are operating below peak capacity. The process then reallocates these under-utilized resources to those customers requiring additional resources to meet SLA thresholds. If all resources are operating at peak capacity, the process reallocates the resources to those customers whose SLAs represent a greater penalty in the event of an SLA breach as compared to those customers whose SLAs provide for a lesser penalty, thereby minimizing the total rebate due upon an SLA breach.
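
    A small sketch of the donor-selection step implied by the abstract, assuming a per-customer penalty value and integer resource units; the Customer fields, the single-transfer decision, and all names are hypothetical simplifications.

    import java.util.Comparator;
    import java.util.List;

    // Sketch: choose which customer's capacity to reallocate so that any resulting
    // SLA breach carries the smallest rebate for the provider.
    public class RebateMinimizer {

        // Hypothetical customer state: SLA penalty on breach, allocated units, units in use.
        record Customer(String name, double slaPenalty, int allocated, int inUse) {
            boolean underUtilized() { return inUse < allocated; }
        }

        // Pick the donor whose resources should move to the given needy customer.
        static Customer chooseDonor(List<Customer> customers, Customer needy) {
            // Prefer under-utilized, lesser-penalty customers: their spare capacity is the
            // cheapest to take. If every customer runs at peak, still take from the
            // lowest-penalty customer so that any breach caused is the least costly one.
            return customers.stream()
                    .filter(c -> !c.name().equals(needy.name()))
                    .filter(Customer::underUtilized)
                    .min(Comparator.comparingDouble(Customer::slaPenalty))
                    .orElseGet(() -> customers.stream()
                            .filter(c -> !c.name().equals(needy.name()))
                            .min(Comparator.comparingDouble(Customer::slaPenalty))
                            .orElseThrow());
        }

        public static void main(String[] args) {
            List<Customer> customers = List.of(
                    new Customer("gold", 500.0, 8, 8),    // needs more capacity
                    new Customer("silver", 100.0, 8, 5),  // under-utilized, cheaper to squeeze
                    new Customer("bronze", 20.0, 4, 4));
            System.out.println("Donor for gold: " + chooseDonor(customers, customers.get(0)).name());
        }
    }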
