Fencing network direct memory access data transfers in a parallel active messaging interface of a parallel computer

    Publication No.: US09075759B2

    Publication Date: 2015-07-07

    Application No.: US12940282

    Filing Date: 2010-11-05

    Abstract: Fencing direct memory access (‘DMA’) data transfers in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including: initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.
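
    A minimal sketch of the idea, not the PAMI API: because the two endpoints share a single deterministic (ordered) channel, the FENCE can simply be injected behind the earlier DMA instructions and completes when it drains, with no per-transfer accounting. All names here (dma_channel_t, dma_inject, and so on) are hypothetical.

        /* Hypothetical ordered-channel model of the FENCE (C99). */
        #include <stdio.h>

        enum op { OP_PUT, OP_FENCE };

        typedef struct { enum op kind; size_t bytes; } dma_instr_t;

        typedef struct {                 /* one ordered injection FIFO per endpoint pair */
            dma_instr_t fifo[64];
            int head, tail;
        } dma_channel_t;

        static void dma_inject(dma_channel_t *c, dma_instr_t i) {
            c->fifo[c->tail++ % 64] = i;
        }

        /* Draining the FIFO in order stands in for the deterministic network
         * retiring transfers in exactly the order they were injected. */
        static void dma_drain_one(dma_channel_t *c) {
            dma_instr_t i = c->fifo[c->head++ % 64];
            if (i.kind == OP_PUT)
                printf("PUT of %zu bytes complete\n", i.bytes);
            else
                printf("FENCE complete: every earlier PUT has completed\n");
        }

        int main(void) {
            dma_channel_t ch = { .head = 0, .tail = 0 };
            dma_inject(&ch, (dma_instr_t){ OP_PUT, 4096 });
            dma_inject(&ch, (dma_instr_t){ OP_PUT, 8192 });
            dma_inject(&ch, (dma_instr_t){ OP_FENCE, 0 });   /* ordered behind the PUTs */
            while (ch.head != ch.tail)
                dma_drain_one(&ch);
            return 0;
        }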

    4. Managing internode data communications for an uninitialized process in a parallel computer
    Granted Patent (In Force)

    Publication No.: US08732725B2

    Publication Date: 2014-05-20

    Application No.: US13292293

    Filing Date: 2011-11-09

    Abstract: A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.
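
    The spill step might look roughly like the following sketch (not IBM's implementation; mu_buffer_t, temp_buffer_t, and agent_spill are hypothetical names): the agent notices that the fixed-size MU buffer of a not-yet-initialized process is full and moves its contents into a temporary buffer allocated in main memory.

        #include <stdlib.h>
        #include <string.h>

        #define MU_SLOTS 8

        typedef struct { char payload[256]; } msg_t;

        typedef struct {               /* fixed MU buffer tied to one process */
            msg_t slot[MU_SLOTS];
            int   count;
            int   initialized;         /* has the target process started yet? */
        } mu_buffer_t;

        typedef struct {               /* temporary buffer in main memory */
            msg_t *slot;
            int    count, capacity;
        } temp_buffer_t;

        /* Called by the application agent when the MU buffer fills up before
         * the process it belongs to has been initialized. */
        void agent_spill(mu_buffer_t *mu, temp_buffer_t *tmp) {
            if (mu->initialized || mu->count < MU_SLOTS)
                return;                                    /* nothing to do */
            if (tmp->slot == NULL) {                       /* establish the temp buffer */
                tmp->capacity = 4 * MU_SLOTS;
                tmp->slot = malloc(sizeof(msg_t) * tmp->capacity);
                tmp->count = 0;
            }
            /* move every pending message out of the MU buffer */
            memcpy(&tmp->slot[tmp->count], mu->slot, sizeof(msg_t) * mu->count);
            tmp->count += mu->count;
            mu->count = 0;             /* MU buffer can accept new messages again */
        }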

    5. Utilizing A Kernel Administration Hardware Thread Of A Multi-Threaded, Multi-Core Compute Node Of A Parallel Computer
    Patent Application (In Force)

    Publication No.: US20140047450A1

    Publication Date: 2014-02-13

    Application No.: US13569275

    Filing Date: 2012-08-08

    IPC Class: G06F9/46

    CPC Class: G06F9/544

    Abstract: Methods, apparatuses, and computer program products for utilizing a kernel administration hardware thread of a multi-threaded, multi-core compute node of a parallel computer are provided. Embodiments include a kernel assigning a memory space of a hardware thread of an application processing core to a kernel administration hardware thread of a kernel processing core. The kernel administration hardware thread is configured to advance the hardware thread to a next memory space associated with the hardware thread in response to the assignment of the kernel administration hardware thread to the memory space of the hardware thread. Embodiments also include the kernel administration hardware thread executing an instruction within the assigned memory space.
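
    As a very rough sketch of the division of labor (heavily simplified; mem_space_t, hw_thread_t, and kernel_admin_service are hypothetical names, and real hardware threads are not plain C functions): the kernel-administration thread is handed one of the application hardware thread's memory spaces, executes the work queued there, and then advances the thread to its next memory space.

        #include <stdio.h>

        #define N_SPACES 4

        typedef void (*kernel_op_t)(void);

        typedef struct {                   /* one memory space per pending request */
            kernel_op_t op;                /* kernel work queued by the app thread */
        } mem_space_t;

        typedef struct {
            mem_space_t space[N_SPACES];   /* the application hardware thread's memory spaces */
            int current;                   /* space currently assigned to the admin thread */
        } hw_thread_t;

        static void flush_op(void) { puts("kernel: flushed messaging unit"); }

        /* The kernel-administration hardware thread: execute within the assigned
         * memory space, then advance the hardware thread to its next space. */
        void kernel_admin_service(hw_thread_t *t) {
            mem_space_t *assigned = &t->space[t->current];
            if (assigned->op)
                assigned->op();                          /* execute the queued instruction */
            t->current = (t->current + 1) % N_SPACES;    /* advance to the next memory space */
        }

        int main(void) {
            hw_thread_t app = { .space = {{ flush_op }}, .current = 0 };
            kernel_admin_service(&app);
            return 0;
        }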

    6. Compiling software for a hierarchical distributed processing system
    Granted Patent (Lapsed)

    Publication No.: US08621446B2

    Publication Date: 2013-12-31

    Application No.: US12770353

    Filing Date: 2010-04-29

    IPC Class: G06F9/45

    CPC Class: G06F8/45

    Abstract: Compiling software for a hierarchical distributed processing system includes: providing, to one or more compiling nodes, software to be compiled, wherein at least a portion of the software to be compiled is to be executed by one or more other nodes; compiling, by the compiling node, the software; maintaining, by the compiling node, any compiled software to be executed on the compiling node; selecting, by the compiling node, one or more nodes in a next tier of the hierarchy of the distributed processing system in dependence upon whether any compiled software is for the selected node or the selected node's descendants; and sending to the selected node only the compiled software to be executed by the selected node or the selected node's descendants.
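
    A minimal sketch of the forwarding rule (hypothetical names, not the patented implementation): after compiling, a node keeps the units targeted at itself and forwards each remaining unit only to the child whose subtree contains the unit's target node.

        #include <stdio.h>

        typedef struct node {
            int id;
            struct node *child[2];          /* next tier of the hierarchy */
        } node_t;

        typedef struct { int target; const char *name; } unit_t;

        /* Does the subtree rooted at n contain the node with the given id? */
        static int subtree_contains(const node_t *n, int id) {
            if (!n) return 0;
            if (n->id == id) return 1;
            return subtree_contains(n->child[0], id) || subtree_contains(n->child[1], id);
        }

        static void distribute(const node_t *self, unit_t *units, int n_units) {
            for (int i = 0; i < n_units; i++) {
                if (units[i].target == self->id) {
                    printf("node %d keeps %s\n", self->id, units[i].name);
                    continue;
                }
                for (int c = 0; c < 2; c++)   /* pick the child whose subtree needs it */
                    if (subtree_contains(self->child[c], units[i].target))
                        printf("node %d sends %s to child %d\n",
                               self->id, units[i].name, self->child[c]->id);
            }
        }

        int main(void) {
            node_t leaf2 = { 2, { NULL, NULL } }, leaf3 = { 3, { NULL, NULL } };
            node_t root  = { 1, { &leaf2, &leaf3 } };
            unit_t units[] = { { 1, "root.o" }, { 2, "worker_a.o" }, { 3, "worker_b.o" } };
            distribute(&root, units, 3);
            return 0;
        }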

    7. Generating an executable version of an application using a distributed compiler operating on a plurality of compute nodes
    Granted Patent (In Force)

    Publication No.: US08495603B2

    Publication Date: 2013-07-23

    Application No.: US12189336

    Filing Date: 2008-08-11

    IPC Class: G06F9/44

    CPC Class: G06F9/54; G06F8/443

    Abstract: Methods, apparatus, and products are disclosed for generating an executable version of an application using a distributed compiler operating on a plurality of compute nodes, including: receiving, by each compute node, a portion of source code for an application; compiling, in parallel by each compute node, the portion of the source code received by that compute node into a portion of object code for the application; performing, in parallel by each compute node, inter-procedural analysis on the portion of the object code of the application for that compute node, including sharing results of the inter-procedural analysis among the compute nodes; optimizing, in parallel by each compute node, the portion of the object code of the application for that compute node using the shared results of the inter-procedural analysis; and generating the executable version of the application in dependence upon the optimized portions of the object code of the application.
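
    The flow might be sketched as follows, using MPI purely as a stand-in for the parallel computer's communication layer (the patent does not prescribe MPI), with the inter-procedural analysis result collapsed to a single integer per node:

        #include <stdio.h>
        #include <stdlib.h>
        #include <mpi.h>

        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            /* 1. each node compiles its portion of the source (stubbed) and
             *    produces a per-portion inter-procedural analysis summary   */
            int my_ipa_summary = rank + 1;       /* stand-in for real IPA data */

            /* 2. share the analysis results among all compute nodes */
            int *all_ipa = malloc(size * sizeof(int));
            MPI_Allgather(&my_ipa_summary, 1, MPI_INT, all_ipa, 1, MPI_INT, MPI_COMM_WORLD);

            /* 3. optimize the local portion of object code using the shared results */
            long combined = 0;
            for (int i = 0; i < size; i++) combined += all_ipa[i];
            printf("rank %d optimizes its object code using combined IPA summary %ld\n",
                   rank, combined);

            /* 4. the optimized portions are then combined into the executable */
            free(all_ipa);
            MPI_Finalize();
            return 0;
        }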

    8. Internode Data Communications In A Parallel Computer
    Patent Application (Lapsed)

    Publication No.: US20130117764A1

    Publication Date: 2013-05-09

    Application No.: US13290642

    Filing Date: 2011-11-07

    IPC Class: G06F9/46

    CPC Class: G06F9/544

    Abstract: Internode data communications in a parallel computer that includes compute nodes, each including main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: the messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a message buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.
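
    A minimal sketch of the boot-time buffering and the copy-on-initialization step (names such as mu_receive and process_init are hypothetical): the messaging unit parks messages for processes that do not exist yet, and each process drains its parked messages into a buffer in main memory when it starts.

        #include <stdlib.h>
        #include <string.h>

        #define N_PROCS   4
        #define SLOTS     8
        #define MSG_BYTES 128

        typedef struct { char data[MSG_BYTES]; } msg_t;
        typedef struct { msg_t slot[SLOTS]; int count; } msg_buffer_t;

        static msg_buffer_t mu_buffer[N_PROCS];   /* allocated in MU memory at boot time */

        /* The MU stores an incoming message for a process that may not exist yet. */
        void mu_receive(int proc, const msg_t *m) {
            msg_buffer_t *b = &mu_buffer[proc];
            if (b->count < SLOTS)
                b->slot[b->count++] = *m;
        }

        /* Called when the process finally initializes: establish a buffer in main
         * memory and copy over everything the MU parked on its behalf. */
        msg_buffer_t *process_init(int proc) {
            msg_buffer_t *main_mem = calloc(1, sizeof(*main_mem));
            msg_buffer_t *parked   = &mu_buffer[proc];
            memcpy(main_mem->slot, parked->slot, sizeof(msg_t) * parked->count);
            main_mem->count = parked->count;
            parked->count   = 0;
            return main_mem;
        }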

    9. Performing A Local Barrier Operation
    Patent Application (Lapsed)

    Publication No.: US20130042254A1

    Publication Date: 2013-02-14

    Application No.: US13206590

    Filing Date: 2011-08-10

    IPC Class: G06F9/52

    CPC Class: G06F9/54; G06F9/522

    Abstract: Performing a local barrier operation with parallel tasks executing on a compute node, including, for each task: retrieving a present value of a counter; calculating, in dependence upon the present value of the counter and a total number of tasks performing the local barrier operation, a base value of the counter, the base value representing the counter's value prior to any task joining the local barrier; calculating, in dependence upon the base value and the total number of tasks performing the local barrier operation, a target value of the counter, the target value representing the counter's value when all tasks have joined the local barrier; joining the local barrier, including atomically incrementing the value of the counter; and repetitively, until the present value of the counter is no less than the target value of the counter: retrieving the present value of the counter and determining whether the present value equals the target value.
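
    The counter arithmetic can be sketched directly (a minimal sketch using C11 atomics and POSIX threads to stand in for the tasks; the patent's counter may live in dedicated hardware, here it is just an atomic integer):

        #include <stdio.h>
        #include <stdatomic.h>
        #include <pthread.h>

        #define NTASKS 4

        static atomic_long counter;                        /* shared, monotonically increasing */

        static void local_barrier(void) {
            long present = atomic_load(&counter);
            long base    = present - (present % NTASKS);   /* counter value before any task joined */
            long target  = base + NTASKS;                  /* counter value once all tasks have joined */
            atomic_fetch_add(&counter, 1);                 /* join: atomically increment */
            while (atomic_load(&counter) < target)         /* spin until present >= target */
                ;
        }

        static void *task(void *arg) {
            long id = (long)arg;
            printf("task %ld before barrier\n", id);
            local_barrier();
            printf("task %ld after barrier\n", id);
            return NULL;
        }

        int main(void) {
            pthread_t t[NTASKS];
            for (long i = 0; i < NTASKS; i++) pthread_create(&t[i], NULL, task, (void *)i);
            for (int i = 0; i < NTASKS; i++)  pthread_join(t[i], NULL);
            return 0;
        }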

    10. Performing A Local Reduction Operation On A Parallel Computer
    Patent Application (Lapsed)

    Publication No.: US20120317399A1

    Publication Date: 2012-12-13

    Application No.: US13585993

    Filing Date: 2012-08-15

    CPC Class: G06F15/17387; G06F15/17318

    Abstract: A parallel computer includes compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. The local reduction includes: copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.
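
    A heavily simplified sketch of the interleaving idea (the copy-to-shared-memory steps are collapsed into direct access, the four buffers are plain arrays, and all names are hypothetical): each of the two reduction cores reduces every other chunk of the element range across the four input buffers.

        #include <stdio.h>

        #define ELEMS 16
        #define CHUNK  4
        #define NBUFS  4               /* reduction0, reduction1, net-write, net-read */

        static int input[NBUFS][ELEMS];
        static int result[ELEMS];      /* reduction result in shared memory */

        /* Each reduction core handles alternating chunks of the element range. */
        static void reduce_core(int core) {
            for (int chunk = core; chunk * CHUNK < ELEMS; chunk += 2)
                for (int i = chunk * CHUNK; i < (chunk + 1) * CHUNK; i++) {
                    int sum = 0;
                    for (int b = 0; b < NBUFS; b++) sum += input[b][i];
                    result[i] = sum;
                }
        }

        int main(void) {
            for (int b = 0; b < NBUFS; b++)
                for (int i = 0; i < ELEMS; i++) input[b][i] = b + 1;   /* sample data */
            reduce_core(0);            /* in the patent these two cores run in parallel */
            reduce_core(1);
            printf("result[0] = %d (expected %d)\n", result[0], 1 + 2 + 3 + 4);
            return 0;
        }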
