On-chip control of thermal cycling

    Publication No.: US10049957B2

    Publication Date: 2018-08-14

    Application No.: US13040094

    Filing Date: 2011-03-03

    IPC Class: H01L 23/34

    Abstract: A method, system, and computer program product for on-chip control of thermal cycling in an integrated circuit (IC) are provided in the illustrative embodiments. A first circuit is configured on the IC for adjusting a first voltage being applied to a first part of the IC. A first temperature of the first part is measured at a first time. A determination is made that the first temperature is outside a temperature range defined by an upper temperature threshold and a lower temperature threshold. The first voltage is adjusted by reducing the first voltage when the first temperature exceeds the upper temperature threshold and by increasing the first voltage when the first temperature is below the lower temperature threshold, thereby causing the first temperature of the first part to attain a value within the temperature range.
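
    A minimal Python sketch of the feedback loop the abstract describes, using hypothetical read_temperature(), get_voltage(), and set_voltage() helpers and illustrative threshold values; it sketches the claimed adjust-within-a-range behavior, not the patented circuit.

        def regulate_part(read_temperature, get_voltage, set_voltage,
                          lower_c=45.0, upper_c=85.0, step_v=0.01):
            """Nudge one part's supply voltage so its temperature stays in [lower_c, upper_c]."""
            temp = read_temperature()   # first temperature measured at a first time
            volt = get_voltage()
            if temp > upper_c:          # above the upper threshold: reduce the voltage
                set_voltage(volt - step_v)
            elif temp < lower_c:        # below the lower threshold: increase the voltage
                set_voltage(volt + step_v)
            # otherwise the temperature is already within the range; leave the voltage alone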

    Application-level memory affinity control
    5.
    Granted Patent
    Application-level memory affinity control (Expired)

    Publication No.: US06701421B1

    Publication Date: 2004-03-02

    Application No.: US09640541

    Filing Date: 2000-08-17

    IPC Class: G06F 12/08

    Abstract: A method for allocating memory in a data processing system in which a configuration table indicative of the system's physical memory is generated following a boot event. The configuration table is then modified to identify a portion of the system's physical memory, thereby hiding the remaining portion from the operating system. Subsequently, a memory allocation request is initiated by an application program. A device driver invoked by the application program then maps physical memory from the hidden portion to the application's virtual address space to satisfy the application request. The application program may be executing on a first node of a multi-node system in which each node is associated with its own local memory. In this embodiment, the node on which the allocated physical memory is located may be derived from the allocation request, thereby facilitating application-level allocation of specified portions of physical memory.
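
    The allocation flow can be pictured with a small Python simulation; the HiddenPool class, its page lists, and the node field on the request are hypothetical stand-ins for the modified configuration table and the device-driver mapping described above.

        class HiddenPool:
            """Physical pages hidden from the operating system, tracked per node."""
            def __init__(self, pages_per_node, nodes=2):
                # stands in for the configuration table edited after the boot event
                self.free = {n: list(range(n * pages_per_node, (n + 1) * pages_per_node))
                             for n in range(nodes)}

            def map_for_app(self, request):
                # "device driver": derive the node from the allocation request, then hand
                # pages from that node's hidden portion to the requesting application
                node, count = request["node"], request["pages"]
                return [self.free[node].pop() for _ in range(count)]

        pool = HiddenPool(pages_per_node=1024)
        print(pool.map_for_app({"node": 1, "pages": 4}))   # pages drawn from node 1's hidden memory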

    Method and system in a computer network for the reliable and consistent ordering of client requests
    6.
    Granted Patent
    Method and system in a computer network for the reliable and consistent ordering of client requests (Expired)

    Publication No.: US06178441B1

    Publication Date: 2001-01-23

    Application No.: US09157425

    Filing Date: 1998-09-21

    IPC Class: G06F 15/16

    Abstract: A method and system for reliably and consistently delivering client requests in a computer network having at least one client connectable to one or more servers among a group of servers, wherein each server among the group of servers replicates a particular network service to ensure that the particular network service remains uninterrupted in the event of a server failure. A particular server is designated among the group of servers to manage client requests which seek to update a particular network service state, before any of the remaining servers among the group of servers receives such a client request. Thereafter, an executable order is specified in which client requests which seek to update the particular network service state are processed among the remaining servers, such that the executable order, upon execution, sequences the client request which seeks to update the particular network service state with respect to all prior and subsequent client requests. The executable order and the client request which seeks to update the particular network service state are automatically transferred to the remaining servers from the particular server in response to initiating the client request. Thereafter, the client request which seeks to update the particular network service state is processed in a tentative mode at the particular server without waiting for the executable order to be executed through to completion among the remaining servers.
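
    A compact Python sketch of the ordering scheme: one designated replica assigns the executable order, transfers it together with the request to the remaining replicas, and processes the update tentatively without waiting for them. Class and method names are illustrative, not taken from the patent.

        import itertools

        class Replica:
            def __init__(self):
                self.log = []                      # (order, request) pairs, applied in order

            def apply(self, order, request):
                self.log.append((order, request))

        class DesignatedServer(Replica):
            def __init__(self, peers):
                super().__init__()
                self.peers = peers
                self.counter = itertools.count(1)  # source of the executable order

            def handle_update(self, request):
                order = next(self.counter)         # sequence the state-updating request
                for peer in self.peers:            # transfer order + request to the others
                    peer.apply(order, request)
                self.apply(order, request)         # tentative processing, no waiting on peers
                return order

        peers = [Replica(), Replica()]
        leader = DesignatedServer(peers)
        leader.handle_update("set state = 42")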

    Framework for scheduling multicore processors
    7.
    Granted Patent
    Framework for scheduling multicore processors (Active)

    Publication No.: US08990831B2

    Publication Date: 2015-03-24

    Application No.: US13413768

    Filing Date: 2012-03-07

    Abstract: A method for a framework for scheduling tasks in a multi-core processor or multiprocessor system is provided in the illustrative embodiments. A thread is selected according to an order in a scheduling discipline, the thread being a thread of an application executing in the data processing system, the thread forming the leader thread in a bundle of threads. A value of a core attribute in a set of core attributes is determined according to a corresponding thread attribute in a set of thread attributes associated with the leader thread. A determination is made whether a second thread can be added to the bundle such that the bundle including the second thread will satisfy a policy. If the determining is affirmative, the second thread is added to the bundle. The bundle is scheduled for execution using a core of the multi-core processor.
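
    A rough Python sketch of the bundling step: the leader thread is taken in the scheduling discipline's order (a FIFO queue here, as an assumption), a core attribute is set from the leader's thread attribute, and further threads join only while the bundle still satisfies a policy. The cache_hungry attribute and the sample policy are invented for illustration.

        from collections import deque

        def build_bundle(run_queue, policy, bundle_limit=4):
            """run_queue holds dicts such as {"name": "t0", "cache_hungry": True}."""
            leader = run_queue.popleft()                            # selected per the discipline's order
            core_attrs = {"cache_hungry": leader["cache_hungry"]}   # from the leader's attributes
            bundle = [leader]
            while run_queue and len(bundle) < bundle_limit:
                candidate = run_queue[0]
                if policy(bundle + [candidate], core_attrs):        # bundle would still satisfy the policy
                    bundle.append(run_queue.popleft())
                else:
                    break
            return bundle, core_attrs                               # scheduled together on one core

        # sample policy: at most one cache-hungry thread per core
        policy = lambda threads, attrs: sum(t["cache_hungry"] for t in threads) <= 1
        queue = deque([{"name": "t0", "cache_hungry": True},
                       {"name": "t1", "cache_hungry": False},
                       {"name": "t2", "cache_hungry": True}])
        print(build_bundle(queue, policy))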

    Framework for scheduling multicore processors
    9.
    Granted Patent
    Framework for scheduling multicore processors (Active)

    Publication No.: US08510749B2

    Publication Date: 2013-08-13

    Application No.: US12789015

    Filing Date: 2010-05-27

    IPC Class: G06F 3/00; G06F 9/46

    Abstract: A system and computer usable program product for a framework for scheduling tasks in a multi-core processor or multiprocessor system are provided in the illustrative embodiments. A thread is selected according to an order in a scheduling discipline, the thread being a thread of an application executing in the data processing system, the thread forming the leader thread in a bundle of threads. A value of a core attribute in a set of core attributes is determined according to a corresponding thread attribute in a set of thread attributes associated with the leader thread. A determination is made whether a second thread can be added to the bundle such that the bundle including the second thread will satisfy a policy. If the determining is affirmative, the second thread is added to the bundle. The bundle is scheduled for execution using a core of the multi-core processor.

    Accelerating recovery in MPI environments
    10.
    Granted Patent
    Accelerating recovery in MPI environments (Expired)

    Publication No.: US08250405B2

    Publication Date: 2012-08-21

    Application No.: US12788990

    Filing Date: 2010-05-27

    IPC Class: G06F 11/00

    Abstract: A method and system for accelerating recovery in an MPI environment are provided in the illustrative embodiments. A first portion of a distributed application executes using a first processor and a second portion executes using a second processor in a distributed computing environment. After a failure of operation of the first portion, the first portion is restored to a checkpoint. A first part of the first portion is distributed to a third processor and a second part to a fourth processor. A computation of the first portion is performed using the first and the second parts in parallel. A first message is computed in the first portion and sent to the second portion, the message having been initially computed after a time of the checkpoint. A second message is replayed from the second portion without computing the second message in the second portion.
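
    The recovery path can be sketched in plain Python with no real MPI calls: the failed portion is restored from its checkpoint, its post-checkpoint work is split across two workers that redo it in parallel, and messages the surviving portion already holds are replayed from a log rather than recomputed. All names and the process-pool stand-in are illustrative assumptions.

        from multiprocessing import Pool

        def recompute(chunk):
            # redo one part of the restored portion's post-checkpoint work
            return sum(chunk)

        def recover(checkpoint_state, pending_work, message_log):
            state = dict(checkpoint_state)            # restore the first portion to its checkpoint
            half = len(pending_work) // 2             # split its work into two parts
            with Pool(2) as pool:                     # the parts are redone on two processors in parallel
                partials = pool.map(recompute, [pending_work[:half], pending_work[half:]])
            state["value"] += sum(partials)           # first messages are recomputed and re-sent
            replayed = list(message_log)              # second messages are replayed, not recomputed
            return state, replayed

        if __name__ == "__main__":
            print(recover({"value": 10}, list(range(8)), ["msg#7", "msg#8"]))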
