System and method for movement of non-aligned data in network buffer model
    61.
    Invention Application
    System and method for movement of non-aligned data in network buffer model (In Force)

    Publication Number: US20060095535A1

    Publication Date: 2006-05-04

    Application Number: US10959801

    Application Date: 2004-10-06

    CPC classification number: H04L47/10

    Abstract: A method is provided for transferring data between first and second nodes of a network. The method includes requesting first data to be transferred by a first upper layer protocol (ULP) operating on the first node of the network; and buffering second data, for transfer to the second node, by a protocol layer lower than the first ULP, the second data comprising an integral number of standard-size units of data that include the first data. The method further includes posting the second data to the network for delivery to the second node; receiving the second data at the second node; and, from the received data, delivering the first data to a second ULP operating on the second node. The method is of particular application when transferring data in unit-size blocks is faster than transferring data of other sizes.

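    Illustrative sketch (not part of the patent record): the buffering step described above amounts to rounding the requested transfer up to a whole number of standard-size units and zero-padding the tail before posting. The unit size and the function name below are assumptions, not taken from the patent.

        #include <stdlib.h>
        #include <string.h>

        #define UNIT_SIZE 64u   /* assumed standard transfer unit, for illustration only */

        /* Copy the ULP's requested "first data" into a buffer whose length is a
         * whole number of UNIT_SIZE units ("second data"); the tail is zero-padded.
         * *out_len receives the padded length actually posted to the network. */
        static unsigned char *buffer_for_unit_transfer(const void *first_data,
                                                       size_t len, size_t *out_len)
        {
            size_t padded = ((len + UNIT_SIZE - 1) / UNIT_SIZE) * UNIT_SIZE;
            unsigned char *buf = calloc(1, padded);
            if (buf == NULL)
                return NULL;
            memcpy(buf, first_data, len);
            *out_len = padded;
            return buf;
        }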

    Facilitating communication within shared memory environments using lock-free queues
    62.
    Invention Application
    Facilitating communication within shared memory environments using lock-free queues (In Force)

    Publication Number: US20050283577A1

    Publication Date: 2005-12-22

    Application Number: US10874024

    Application Date: 2004-06-22

    CPC classification number: G06F9/526 G06F9/546

    Abstract: Lock-free queues of a shared memory environment are used to facilitate communication within that environment. The lock-free queues can be used for interprocess communication, as well as intraprocess communication. The lock-free queues are structured to minimize the use of atomic operations when performing operations on the queues, and to minimize the number of enqueue/dequeue operations to be performed on the queues.

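    Illustrative sketch (not part of the patent record): one classic way to minimize atomic operations in a shared-memory queue is a single-producer/single-consumer ring buffer in which only the head and tail indices are atomic and no read-modify-write is needed. This is a generic C11 analogue, not the patented queue structure.

        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stddef.h>

        #define QCAP 1024u   /* illustrative capacity; power of two */

        /* Zero-initialize before use (e.g. place in shared memory and memset). */
        struct spsc_queue {
            void *slot[QCAP];
            _Atomic size_t head;   /* next slot to dequeue; advanced only by the consumer */
            _Atomic size_t tail;   /* next slot to enqueue; advanced only by the producer */
        };

        static bool spsc_enqueue(struct spsc_queue *q, void *item)
        {
            size_t tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
            size_t head = atomic_load_explicit(&q->head, memory_order_acquire);
            if (tail - head == QCAP)
                return false;                          /* queue full */
            q->slot[tail % QCAP] = item;
            atomic_store_explicit(&q->tail, tail + 1, memory_order_release);
            return true;
        }

        static bool spsc_dequeue(struct spsc_queue *q, void **item)
        {
            size_t head = atomic_load_explicit(&q->head, memory_order_relaxed);
            size_t tail = atomic_load_explicit(&q->tail, memory_order_acquire);
            if (head == tail)
                return false;                          /* queue empty */
            *item = q->slot[head % QCAP];
            atomic_store_explicit(&q->head, head + 1, memory_order_release);
            return true;
        }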

    Techniques for debugging code during runtime
    63.
    Invention Grant
    Techniques for debugging code during runtime (Expired)

    Publication Number: US08607199B2

    Publication Date: 2013-12-10

    Application Number: US12639459

    Application Date: 2009-12-16

    CPC classification number: G06F11/3644

    Abstract: A technique for debugging code during runtime includes providing, from an outside process, a trigger to a daemon. In this case, the trigger is associated with a registered callback function. The trigger is then provided, from the daemon, to one or more designated tasks of a job. The registered callback function (that is associated with the trigger) is then executed by the one or more designated tasks. Execution results of the executed registered callback function are then returned (from the one or more designated tasks) to the daemon.

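    Illustrative sketch (not part of the patent record): the trigger-to-callback dispatch can be pictured as a lookup table of registered callbacks keyed by trigger value, executed in a designated task when the daemon forwards the trigger. All names here are hypothetical.

        #include <stddef.h>
        #include <stdio.h>

        typedef int (*debug_callback)(void *task_ctx);   /* returns a result code */

        struct trigger_entry {
            int trigger;                 /* trigger value delivered by the daemon */
            debug_callback cb;           /* callback registered for that trigger  */
        };

        static int dump_state(void *task_ctx)
        {
            printf("dumping state for task context %p\n", task_ctx);
            return 0;
        }

        static const struct trigger_entry registry[] = {
            { 1, dump_state },           /* hypothetical registration */
        };

        /* Run in a designated task when the daemon forwards a trigger; the return
         * value would be sent back to the daemon as the execution result. */
        static int run_trigger(int trigger, void *task_ctx)
        {
            for (size_t i = 0; i < sizeof registry / sizeof registry[0]; i++)
                if (registry[i].trigger == trigger)
                    return registry[i].cb(task_ctx);
            return -1;                   /* no callback registered for this trigger */
        }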

    Creating A Checkpoint Of A Parallel Application Executing In A Parallel Computer That Supports Computer Hardware Accelerated Barrier Operations
    64.
    Invention Application
    Creating A Checkpoint Of A Parallel Application Executing In A Parallel Computer That Supports Computer Hardware Accelerated Barrier Operations (Pending, Published)

    Publication Number: US20130247069A1

    Publication Date: 2013-09-19

    Application Number: US13420676

    Application Date: 2012-03-15

    CPC classification number: G06F9/522 G06F9/542 G06F11/1402 G06F11/1438

    Abstract: In a parallel computer executing a parallel application, where the parallel computer includes a number of compute nodes, each compute node including one or more computer processors, the parallel application including a number of processes, and one or more of the processes executing a barrier operation, creating a checkpoint of the parallel application includes: maintaining, by each computer processor, global barrier operation state information, where the global barrier operation state information includes an aggregation of each process's barrier operation state information; invoking, for each process of the parallel application, a checkpoint handler; saving, by each process's checkpoint handler as part of a checkpoint for the parallel application, the process's barrier operation state information; and exiting, by each process, the checkpoint handler.

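    Illustrative sketch (not part of the patent record): each process's checkpoint handler copies its own barrier-operation state into the checkpoint record before returning. The state fields and types shown here are assumed purely for illustration.

        #include <stdio.h>

        /* Hypothetical per-process barrier-operation state kept by the runtime. */
        struct barrier_state {
            int  barrier_id;       /* which barrier the process is participating in */
            long arrival_count;    /* arrivals observed so far                      */
            int  phase;            /* current barrier phase                         */
        };

        struct checkpoint_record {
            struct barrier_state barrier;
            /* ... application memory, registers, etc. would follow ... */
        };

        /* Invoked in each process when a checkpoint is requested: save the
         * process's barrier state as part of the checkpoint, then return. */
        static int checkpoint_handler(const struct barrier_state *my_state, FILE *out)
        {
            struct checkpoint_record rec = { .barrier = *my_state };
            return fwrite(&rec, sizeof rec, 1, out) == 1 ? 0 : -1;
        }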

    Collective acceleration unit tree flow control and retransmit
    65.
    Invention Grant
    Collective acceleration unit tree flow control and retransmit (Expired)

    Publication Number: US08417778B2

    Publication Date: 2013-04-09

    Application Number: US12640208

    Application Date: 2009-12-17

    CPC classification number: G06F15/16

    Abstract: A mechanism is provided for collective acceleration unit (CAU) tree flow control that forms a logical tree (sub-network) among processors and transfers "collective" packets on this tree. The system supports many collective trees, and each CAU includes resources to support a subset of the trees. Each CAU has limited buffer space, and the connection between two CAUs is not completely reliable. Therefore, to let collective packets traverse the tree without colliding with each other over buffer space, and to guarantee end-to-end packet delivery, each CAU in the system flow-controls the packets, detects packet loss, and retransmits lost packets.

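    Illustrative sketch (not part of the patent record): a credit-limited send window with acknowledgement tracking and timeout-driven retransmission, as a generic analogue of the flow-control-and-retransmit behaviour described above. The window size, packet size, and structure names are assumptions.

        #include <stdbool.h>
        #include <string.h>

        #define WINDOW   8u      /* assumed per-neighbor buffer credits */
        #define PKT_SIZE 128u

        struct pending_pkt {
            unsigned seq;
            bool     in_flight;                  /* sent but not yet acknowledged */
            unsigned char payload[PKT_SIZE];
        };

        struct cau_link {
            struct pending_pkt window[WINDOW];
            unsigned next_seq;                   /* next sequence number to assign   */
            unsigned unacked;                    /* packets awaiting acknowledgement */
        };

        /* Send only when a buffer credit and a window slot are free;
         * keep a copy of the packet so it can be retransmitted if lost. */
        static bool cau_send(struct cau_link *l, const void *data,
                             void (*tx)(unsigned seq, const void *pkt))
        {
            struct pending_pkt *p = &l->window[l->next_seq % WINDOW];
            if (l->unacked == WINDOW || p->in_flight)
                return false;                    /* no credit: hold the packet */
            p->seq = l->next_seq++;
            p->in_flight = true;
            memcpy(p->payload, data, PKT_SIZE);
            l->unacked++;
            tx(p->seq, p->payload);
            return true;
        }

        /* An acknowledgement frees the credit; a timeout resends unacked packets. */
        static void cau_ack(struct cau_link *l, unsigned seq)
        {
            struct pending_pkt *p = &l->window[seq % WINDOW];
            if (p->in_flight && p->seq == seq) { p->in_flight = false; l->unacked--; }
        }

        static void cau_timeout(struct cau_link *l,
                                void (*tx)(unsigned seq, const void *pkt))
        {
            for (unsigned i = 0; i < WINDOW; i++)
                if (l->window[i].in_flight)
                    tx(l->window[i].seq, l->window[i].payload);
        }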

    Reporting of partially performed memory move
    66.
    Invention Grant
    Reporting of partially performed memory move (In Force)

    Publication Number: US08356151B2

    Publication Date: 2013-01-15

    Application Number: US12024504

    Application Date: 2008-02-01

    CPC classification number: G06F9/30043 G06F12/0831 G06F12/0862 G06F12/1027

    Abstract: A method performed in a data processing system initiates an asynchronous memory move (AMM) operation, whereby a processor performs a move of data in virtual address space from a first effective address to a second effective address and forwards parameters of the AMM operation to asynchronous memory mover logic for completion of the physical movement of data from a first memory location to a second memory location. The processor executes a second operation, which checks a status of the completion of the data move and returns a notification indicating the status. The notification indicates a status, which includes one of: data move in progress; data move totally done; data move partially done; data move cannot be performed; and occurrence of a translation look-aside buffer invalidate entry (TLBIE) operation. The processor initiates one or more actions in response to the notification received.

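    Illustrative sketch (not part of the patent record): the notification amounts to a small set of completion codes returned by a check operation, which the caller uses to decide whether to keep waiting, restart the move, or fall back. The enum and polling loop below are a software analogue, not the actual instruction-set interface.

        /* Illustrative completion codes mirroring the states named in the abstract. */
        enum amm_status {
            AMM_IN_PROGRESS,        /* data move in progress                      */
            AMM_DONE,               /* data move totally done                     */
            AMM_PARTIAL,            /* data move partially done                   */
            AMM_FAILED,             /* data move cannot be performed              */
            AMM_TLBIE_OCCURRED      /* a TLB invalidate entry (TLBIE) op occurred */
        };

        /* Hypothetical "check" operation; real hardware would read mover state. */
        static enum amm_status amm_check(const volatile enum amm_status *status_reg)
        {
            return *status_reg;
        }

        /* The caller initiates an action based on the returned notification. */
        static int amm_wait_or_recover(const volatile enum amm_status *status_reg)
        {
            for (;;) {
                switch (amm_check(status_reg)) {
                case AMM_IN_PROGRESS:     continue;   /* poll again                */
                case AMM_DONE:            return 0;   /* move complete             */
                case AMM_PARTIAL:                     /* fall through              */
                case AMM_TLBIE_OCCURRED:  return 1;   /* caller restarts the move  */
                case AMM_FAILED:          return -1;  /* caller moves data itself  */
                }
            }
        }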

    Mechanisms for communicating with an asynchronous memory mover to perform AMM operations
    67.
    Invention Grant
    Mechanisms for communicating with an asynchronous memory mover to perform AMM operations (In Force)

    Publication Number: US08245004B2

    Publication Date: 2012-08-14

    Application Number: US12024560

    Application Date: 2008-02-01

    CPC classification number: G06F9/30032 G06F12/0831 G06F12/0862 G06F12/10

    Abstract: A data processing system includes a set of architected registers within which the processor places state and other information to communicate with the asynchronous memory mover in order to initiate and control an AMM operation. The asynchronous memory mover performs an asynchronous memory move (AMM) operation in response to receiving a set of parameters within the architected registers, which parameters are associated with an AMM store instruction executed by the processor to initiate a move of data in virtual space before placing the information in the architected registers. The architected registers are processor architected registers, defined on a per-thread basis by a compiler, or memory-mapped architected registers allocated for communicating with the asynchronous memory mover during a bind and subsequent execution of an application. The architected registers are also utilized to store state information to enable a restore to a point before execution of the AMM operation.

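    Illustrative sketch (not part of the patent record): the handshake can be pictured as the processor filling a small set of (here, memory-mapped) registers with the move parameters and a start bit for the mover to pick up asynchronously. The register layout and field names are assumptions.

        #include <stdint.h>

        /* Hypothetical memory-mapped architected registers used to hand an
         * asynchronous memory move (AMM) request to the mover logic. */
        struct amm_regs {
            volatile uint64_t src_ea;     /* source effective address      */
            volatile uint64_t dst_ea;     /* destination effective address */
            volatile uint64_t length;     /* number of bytes to move       */
            volatile uint64_t control;    /* bit 0 = start, bit 1 = busy   */
        };

        /* Processor side: after the AMM store instruction has initiated the move
         * in virtual address space, publish the parameters and start the mover. */
        static void amm_post(struct amm_regs *regs, uint64_t src_ea,
                             uint64_t dst_ea, uint64_t length)
        {
            regs->src_ea  = src_ea;
            regs->dst_ea  = dst_ea;
            regs->length  = length;
            regs->control |= 1u;          /* signal the asynchronous memory mover */
        }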

    Guaranteeing delivery of multi-packet GSM messages
    68.
    Invention Grant
    Guaranteeing delivery of multi-packet GSM messages (Expired)

    Publication Number: US08146094B2

    Publication Date: 2012-03-27

    Application Number: US12024678

    Application Date: 2008-02-01

    CPC classification number: H04L1/1642 G06F9/542

    Abstract: A target task ensures complete delivery of a global shared memory (GSM) message from an originating task to the target task. The target task's HFI receives a first of multiple GSM packets generated from a single GSM message sent from the originating task. The HFI logic assigns a sequence number and corresponding tuple to track receipt of the complete GSM message. The sequence number is unique relative to other sequence numbers assigned to GSM messages that have not been completely received from the initiating task. The HFI updates a count value within the tuple, which comprises the sequence number and the count value for the first GSM packet and for each subsequent GSM packet received for the GSM message. The HFI determines when receipt of the GSM message is complete by comparing the count value with a count total retrieved from the packet header.

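    Illustrative sketch (not part of the patent record): the HFI bookkeeping described above can be modelled as a table of (sequence number, received count, expected total) tuples, with the message declared complete when the count reaches the total carried in the packet header. The table layout and limits are assumptions.

        #include <stdbool.h>
        #include <stddef.h>

        #define MAX_INFLIGHT_MSGS 16   /* illustrative limit on concurrently arriving messages */

        struct msg_tuple {
            bool     in_use;
            unsigned seq;        /* sequence number assigned to this GSM message */
            unsigned received;   /* packets of this message received so far      */
            unsigned total;      /* packet count carried in each packet header   */
        };

        static struct msg_tuple table[MAX_INFLIGHT_MSGS];

        /* Called per arriving packet; returns true when the whole message is in. */
        static bool gsm_packet_arrived(unsigned seq, unsigned total_from_header)
        {
            struct msg_tuple *t = NULL, *free_slot = NULL;
            for (int i = 0; i < MAX_INFLIGHT_MSGS; i++) {
                if (table[i].in_use && table[i].seq == seq) { t = &table[i]; break; }
                if (!table[i].in_use && free_slot == NULL)  free_slot = &table[i];
            }
            if (t == NULL) {                       /* first packet of this message */
                if (free_slot == NULL)
                    return false;                  /* table full; drop for now     */
                t = free_slot;
                t->in_use = true;
                t->seq = seq;
                t->received = 0;
                t->total = total_from_header;
            }
            t->received++;
            if (t->received == t->total) {         /* message complete: free tuple */
                t->in_use = false;
                return true;
            }
            return false;
        }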

    Managing a region cache
    69.
    Invention Grant
    Managing a region cache (Expired)

    Publication Number: US08135911B2

    Publication Date: 2012-03-13

    Application Number: US12255180

    Application Date: 2008-10-21

    CPC classification number: G06F12/0864

    Abstract: A method, system, and computer program product are provided for managing a cache. A region to be stored within the cache is received. The cache includes multiple regions and each of the regions is defined by memory ranges having a starting index and an ending index. The region that has been received is stored in the cache in accordance with a cache invariant. The cache invariant guarantees that at any given point in time the regions in the cache are stored in a given order and none of the regions are completely contained within any other of the regions.

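    Illustrative sketch (not part of the patent record): the stated invariant can be maintained by keeping regions sorted by starting index, rejecting a new region that an existing region already covers, and evicting any cached region that the new one completely contains. The array-based cache below is only an illustration.

        #include <stdbool.h>

        struct region { unsigned long start, end; };   /* memory range, start <= end */

        #define CACHE_CAP 32
        static struct region cache[CACHE_CAP];         /* kept sorted by start index */
        static int cache_len;

        static bool contains(struct region a, struct region b)  /* a fully contains b */
        {
            return a.start <= b.start && b.end <= a.end;
        }

        /* Insert while preserving the invariant: regions stay ordered by starting
         * index and no region is completely contained within another. */
        static bool region_cache_insert(struct region r)
        {
            /* reject the new region if an existing region already covers it */
            for (int i = 0; i < cache_len; i++)
                if (contains(cache[i], r))
                    return false;
            /* evict any cached region that the new region completely contains */
            int w = 0;
            for (int i = 0; i < cache_len; i++)
                if (!contains(r, cache[i]))
                    cache[w++] = cache[i];
            cache_len = w;
            if (cache_len == CACHE_CAP)
                return false;                 /* full; a real cache would evict */
            /* insertion by starting index keeps the given order */
            int pos = cache_len;
            while (pos > 0 && cache[pos - 1].start > r.start) {
                cache[pos] = cache[pos - 1];
                pos--;
            }
            cache[pos] = r;
            cache_len++;
            return true;
        }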

    FLOW CONTROL FOR RELIABLE MESSAGE PASSING
    70.
    Invention Application
    FLOW CONTROL FOR RELIABLE MESSAGE PASSING (Expired)

    Publication Number: US20120023304A1

    Publication Date: 2012-01-26

    Application Number: US12841399

    Application Date: 2010-07-22

    Abstract: A message flow controller limits a process from passing a new message in a reliable message passing layer from a source node to at least one destination node while a total number of in-flight messages for the process meets a first level limit. The message flow controller limits the new message from passing from the source node to a particular destination node from among a plurality of destination nodes while a total number of in-flight messages to the particular destination node meets a second level limit. Responsive to the total number of in-flight messages to the particular destination node not meeting the second level limit, the message flow controller only sends a new packet from among at least one packet for the new message to the particular destination node while a total number of in-flight packets for the new message is less than a third level limit.

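    Illustrative sketch (not part of the patent record): the three-level gating reduces to three counters checked against their limits before a message or packet is released. The limit values and names below are assumptions, not figures from the patent.

        #include <stdbool.h>

        /* Illustrative limits for the three levels described in the abstract. */
        #define PER_PROCESS_MSG_LIMIT 64   /* level 1: in-flight messages, all destinations */
        #define PER_DEST_MSG_LIMIT    16   /* level 2: in-flight messages per destination   */
        #define PER_MSG_PKT_LIMIT      8   /* level 3: in-flight packets per message        */

        struct proc_state { int inflight_msgs; };   /* per source process     */
        struct dest_state { int inflight_msgs; };   /* per destination node   */
        struct msg_state  { int inflight_pkts; };   /* per message being sent */

        /* A new message may be started only if levels 1 and 2 both have room. */
        static bool may_start_message(const struct proc_state *p,
                                      const struct dest_state *d)
        {
            return p->inflight_msgs < PER_PROCESS_MSG_LIMIT &&
                   d->inflight_msgs < PER_DEST_MSG_LIMIT;
        }

        /* A packet of an admitted message is released only under the level-3 limit. */
        static bool may_send_packet(const struct msg_state *m)
        {
            return m->inflight_pkts < PER_MSG_PKT_LIMIT;
        }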
