Mechanism to prevent illegal access to task address space by unauthorized tasks
    2.
    Invention Grant
    Mechanism to prevent illegal access to task address space by unauthorized tasks (In Force)

    Publication No.: US08275947B2

    Publication Date: 2012-09-25

    Application No.: US12024410

    Filing Date: 2008-02-01

    CPC classification number: G06F9/544 G06F9/468

    Abstract: A method and data processing system for tracking global shared memory (GSM) operations to and from a local node configured with a host fabric interface (HFI) coupled to a network fabric. During task/job initialization, the system OS assigns HFI window(s) to handle the GSM packet generation and GSM packet receipt and processing for each local task. HFI processing logic automatically tags each GSM packet generated by the HFI window with a global job identifier (ID) of the job to which the local task is affiliated. The job ID is embedded within each GSM packet placed on the network fabric. On receipt of a GSM packet from the network fabric, the HFI logic retrieves the embedded job ID and compares the embedded job ID with the ID within the HFI window(s). GSM packets are forwarded to an HFI window only when the embedded job ID matches the HFI window's job ID.
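The job-ID filtering described above can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the names `HFIWindow`, `GSMPacket`, and `receive` are invented for the example, and the real mechanism operates in HFI hardware logic rather than application code.

```python
# Hypothetical sketch: each HFI window stores the global job ID assigned at
# task/job initialization, and an incoming GSM packet is forwarded to the
# window only when the packet's embedded job ID matches.
from dataclasses import dataclass, field

@dataclass
class GSMPacket:
    job_id: int          # global job ID embedded when the packet was generated
    payload: bytes

@dataclass
class HFIWindow:
    job_id: int          # job ID assigned by the system OS at initialization
    inbox: list = field(default_factory=list)

    def receive(self, pkt: GSMPacket) -> bool:
        """Forward the packet to this window only on a job-ID match."""
        if pkt.job_id == self.job_id:
            self.inbox.append(pkt)
            return True
        return False     # unauthorized job: packet is not forwarded

window = HFIWindow(job_id=42)
assert window.receive(GSMPacket(job_id=42, payload=b"data"))       # accepted
assert not window.receive(GSMPacket(job_id=7, payload=b"attack"))  # rejected
```

The comparison happens at the receiving HFI, so a task in one job can never have its packets delivered into another job's address space.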


    Generating and issuing global shared memory operations via a send FIFO
    3.
    Invention Grant
    Generating and issuing global shared memory operations via a send FIFO (In Force)

    Publication No.: US08200910B2

    Publication Date: 2012-06-12

    Application No.: US12024664

    Filing Date: 2008-02-01

    CPC classification number: G06F9/544

    Abstract: A method for issuing global shared memory (GSM) operations from an originating task on a first node coupled to a network fabric of a distributed network via a host fabric interface (HFI). The originating task generates a GSM command within an effective address (EA) space. The task then places the GSM command within a send FIFO. The send FIFO is a portion of real memory having real addresses (RA) that are memory mapped to EAs of a globally executing job. The originating task maintains a local EA-to-RA mapping of only a portion of the real address space of the globally executing job. The task enables the HFI to retrieve the GSM command from the send FIFO into an HFI window allocated to the originating task. The HFI window generates a corresponding GSM packet containing GSM operations and/or data, and the HFI window issues the GSM packet to the network fabric.
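The enqueue-and-issue flow above can be sketched in a few lines. This is a hedged simplification: EA-to-RA translation is elided, and `SendFIFO` and `hfi_issue` are illustrative names, not APIs from the patent.

```python
# Hypothetical sketch: the originating task pushes GSM commands into a send
# FIFO; the HFI drains the FIFO and wraps each command into a GSM packet
# tagged with the window's job ID before issuing it to the fabric.
from collections import deque

class SendFIFO:
    def __init__(self):
        self._q = deque()

    def push(self, command):   # originating task places a GSM command
        self._q.append(command)

    def pop(self):             # HFI retrieves the next command, if any
        return self._q.popleft() if self._q else None

def hfi_issue(fifo, window_job_id):
    """HFI window converts queued commands into tagged GSM packets."""
    packets = []
    while (cmd := fifo.pop()) is not None:
        packets.append({"job_id": window_job_id, "op": cmd})
    return packets

fifo = SendFIFO()
fifo.push("gsm_put addr=0x100 len=64")
fifo.push("gsm_get addr=0x200 len=32")
pkts = hfi_issue(fifo, window_job_id=42)
assert len(pkts) == 2 and pkts[0]["job_id"] == 42
```

In the patented design the FIFO lives in real memory that is memory-mapped into the task's effective address space, so the task writes commands with ordinary stores and only the HFI needs the RA view.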


    Managing preemption in a parallel computing system
    4.
    Invention Grant
    Managing preemption in a parallel computing system (Expired)

    Publication No.: US08141084B2

    Publication Date: 2012-03-20

    Application No.: US12098868

    Filing Date: 2008-04-07

    CPC classification number: G06F9/5022 G06F9/5077

    Abstract: The present invention provides a portable, user-space release and reacquisition of adapter resources for a given job on a node, using information in a network resource table. The information in the network resource table is obtained when a user space application is loaded by a resource manager. The invention thus provides a portable solution that works for any interconnect where adapter resources need to be freed and reacquired, without requiring a specific function in the device driver. The preemption request is made on a per-job basis using a key, or “job key,” that was previously loaded when the user space application or job originally requested the adapter resources. This is done for each OS instance where the job runs.
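The job-key mechanism can be sketched as a small table keyed by job key. This is an assumption-laden illustration: `NetworkResourceTable`, `release`, and `reacquire` are invented names standing in for the patented table and its preemption operations.

```python
# Hypothetical sketch: the network resource table maps a job key (recorded
# when the job first acquired adapter resources) to those resources, so a
# later preemption request can release and reacquire them per job without
# any interconnect-specific driver code.
class NetworkResourceTable:
    def __init__(self):
        self._entries = {}   # job_key -> {"resources": [...], "held": bool}

    def load(self, job_key, resources):
        """Record resources when the job originally acquires them."""
        self._entries[job_key] = {"resources": resources, "held": True}

    def release(self, job_key):
        """Preemption: free the adapter resources for this job."""
        entry = self._entries[job_key]
        entry["held"] = False
        return entry["resources"]

    def reacquire(self, job_key):
        """Resume: take the same adapter resources back."""
        entry = self._entries[job_key]
        entry["held"] = True
        return entry["resources"]

table = NetworkResourceTable()
table.load(0xBEEF, ["adapter0_window3"])
assert table.release(0xBEEF) == ["adapter0_window3"]
assert table.reacquire(0xBEEF) == ["adapter0_window3"]
```

The same sequence would run once per OS instance on which the job executes.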


Method for enabling direct prefetching of data during asynchronous memory move operation
    5.
    Invention Grant
    Method for enabling direct prefetching of data during asynchronous memory move operation (Expired)

    Publication No.: US07921275B2

    Publication Date: 2011-04-05

    Application No.: US12024598

    Filing Date: 2008-02-01

    Abstract: While an asynchronous memory move (AMM) operation is ongoing, a prefetch request for data from the source effective address or the destination effective address triggers cache injection by the AMM mover of relevant data from the stream of data being moved in the physical memory. The memory controller forwards the first prefetched line to the prefetch engine and L1 cache, the next cache lines in the sequence of data to the L2 cache, and a subsequent set of cache lines to the L3 cache. The memory controller then forwards the remaining data to the destination memory location. Quick access to prefetch data is enabled by buffering the stream of data in the upper caches rather than placing all the moved data within the memory. Also, the memory controller places moved data into only a subset of the available cache lines of the upper level cache.
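The tiered placement described above can be sketched as a partition of the moved stream. The split sizes here (1 line to L1, 4 to L2, 16 to L3) are made-up parameters for illustration; the patent does not specify these counts.

```python
# Illustrative sketch: the first prefetched cache line goes to the L1 cache,
# the next set to L2, a further set to L3, and the remainder of the moved
# stream goes to the destination memory location.
def place_stream(lines, l1_lines=1, l2_lines=4, l3_lines=16):
    """Partition a moved data stream across the cache hierarchy."""
    tiers = {"L1": [], "L2": [], "L3": [], "memory": []}
    for i, line in enumerate(lines):
        if i < l1_lines:
            tiers["L1"].append(line)
        elif i < l1_lines + l2_lines:
            tiers["L2"].append(line)
        elif i < l1_lines + l2_lines + l3_lines:
            tiers["L3"].append(line)
        else:
            tiers["memory"].append(line)
    return tiers

tiers = place_stream(list(range(32)))
assert tiers["L1"] == [0]
assert tiers["L2"] == [1, 2, 3, 4]
assert len(tiers["L3"]) == 16 and len(tiers["memory"]) == 11
```

Buffering only the leading lines in the upper caches is what gives the prefetch engine quick access without flooding any one cache level with the whole stream.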


    MANAGING PREEMPTION IN A PARALLEL COMPUTING SYSTEM
    6.
    Invention Application
    MANAGING PREEMPTION IN A PARALLEL COMPUTING SYSTEM (Expired)

    Publication No.: US20110061053A1

    Publication Date: 2011-03-10

    Application No.: US12098868

    Filing Date: 2008-04-07

    CPC classification number: G06F9/5022 G06F9/5077

    Abstract: The present invention provides a portable, user-space release and reacquisition of adapter resources for a given job on a node, using information in a network resource table. The information in the network resource table is obtained when a user space application is loaded by a resource manager. The invention thus provides a portable solution that works for any interconnect where adapter resources need to be freed and reacquired, without requiring a specific function in the device driver. The preemption request is made on a per-job basis using a key, or “job key,” that was previously loaded when the user space application or job originally requested the adapter resources. This is done for each OS instance where the job runs.


    Mechanism to provide reliability through packet drop detection
    7.
    Invention Grant
    Mechanism to provide reliability through packet drop detection (Expired)

    Publication No.: US07877436B2

    Publication Date: 2011-01-25

    Application No.: US12024600

    Filing Date: 2008-02-01

    CPC classification number: G06F9/544

    Abstract: A method and a data processing system for completing checkpoint processing of a distributed job with local tasks communicating with other remote tasks via a host fabric interface (HFI) and assigned HFI window. Each HFI window has a send count and a receive count, which tracks GSM messages that are sent from and received at the HFI window. When a checkpoint is initiated by a master task, each local task forwards the send count and the receive count to the master task. The master task sums the respective counts and then compares the totals to each other. When the send count total is equal to the receive count total, the tasks are permitted to continue processing. However, when the send count total is not equal to the receive count total, the master task notifies each task of the job to rollback to a previous checkpoint or kill the job execution.
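The quiescence test at the heart of this checkpoint protocol reduces to a simple comparison, sketched below. The function name and the string return values are illustrative only.

```python
# Hedged sketch: the master task sums every task's send count and receive
# count; if the totals match, no GSM message is still in flight and the
# checkpoint may proceed, otherwise the job rolls back or is killed.
def checkpoint_decision(task_counts):
    """task_counts: list of (send_count, receive_count), one per task."""
    total_sent = sum(s for s, _ in task_counts)
    total_received = sum(r for _, r in task_counts)
    return "proceed" if total_sent == total_received else "rollback_or_kill"

assert checkpoint_decision([(10, 7), (5, 8)]) == "proceed"           # 15 == 15
assert checkpoint_decision([(10, 7), (5, 6)]) == "rollback_or_kill"  # 15 != 13
```

Note that only the global totals must match; individual tasks may have sent more than they received, as in the first example above.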


    Mechanism to perform debugging of global shared memory (GSM) operations
    8.
    Invention Grant
    Mechanism to perform debugging of global shared memory (GSM) operations (Expired)

    Publication No.: US07873879B2

    Publication Date: 2011-01-18

    Application No.: US12024585

    Filing Date: 2008-02-01

    CPC classification number: G06F13/385

    Abstract: A host fabric interface (HFI) enables debugging of global shared memory (GSM) operations received at a local node from a network fabric. The local node has a memory management unit (MMU), which provides an effective address to real address (EA-to-RA) translation table that is utilized by the HFI to evaluate when EAs of GSM operations/data from a received GSM packet is memory-mapped to RAs of the local memory. The HFI retrieves the EA associated with a GSM operation/data within a received GSM packet. The HFI forwards the EA to the MMU, which determines when the EA is mapped to RAs within the local memory for the local task. The HFI processing logic enables processing of the GSM packet only when the EA of the GSM operation/data within the GSM packet is an EA that has a local RA translation. Non-matching EAs result in an error condition that requires debugging.
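The translation gate described above can be sketched as a lookup that either yields a real address or raises a debuggable error. `process_gsm_packet` and `TranslationError` are invented names; the real check is performed by HFI logic consulting the MMU.

```python
# Hypothetical sketch: the HFI processes a GSM packet only if the MMU's
# EA-to-RA table can translate the packet's effective address to a local
# real address; a non-matching EA raises an error condition for debugging.
class TranslationError(Exception):
    pass

def process_gsm_packet(ea, ea_to_ra):
    """ea_to_ra models the MMU's per-task EA-to-RA translation table."""
    ra = ea_to_ra.get(ea)
    if ra is None:
        raise TranslationError(f"no local RA for EA {ea:#x}: debug required")
    return ra    # proceed with the GSM operation at this real address

mapping = {0x1000: 0x80000}
assert process_gsm_packet(0x1000, mapping) == 0x80000
```

Treating an unmapped EA as a hard error, rather than silently dropping the packet, is what makes stray GSM operations visible to a debugger.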


    System and method for movement of non-aligned data in network buffer model
    9.
    Invention Grant
    System and method for movement of non-aligned data in network buffer model (In Force)

    Publication No.: US07840643B2

    Publication Date: 2010-11-23

    Application No.: US10959801

    Filing Date: 2004-10-06

    CPC classification number: H04L47/10

    Abstract: A method is provided for transferring data between first and second nodes of a network. The method includes requesting first data to be transferred by a first upper layer protocol (ULP) operating on the first node of the network; and buffering second data for transfer to the second node by a protocol layer lower than the first ULP, the second data comprising an integral number of standard-size units of data that include the first data. The method further includes posting the second data to the network for delivery to the second node; receiving the second data at the second node; and, from the received data, delivering the first data to a second ULP operating on the second node. The method is particularly applicable when transferring data in unit sizes is faster than transferring data in other than unit sizes.
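The buffering step reduces to padding the non-aligned first data out to a whole number of standard-size units. The sketch below assumes an 8-byte unit and zero-fill padding purely for illustration; the patent does not fix either choice.

```python
# Illustrative sketch: the lower protocol layer pads the (possibly
# non-aligned) first data to an integral number of standard-size units,
# producing the "second data" that is posted to the network.
UNIT = 8  # assumed standard unit size in bytes, for illustration only

def buffer_for_transfer(first_data: bytes) -> bytes:
    """Return second data: first_data padded to a whole number of units."""
    remainder = len(first_data) % UNIT
    pad = (UNIT - remainder) % UNIT
    return first_data + b"\x00" * pad

second = buffer_for_transfer(b"hello")               # 5 bytes -> 8 bytes
assert len(second) == 8 and second.startswith(b"hello")
assert len(buffer_for_transfer(b"12345678")) == 8    # already aligned
```

The receiving ULP then strips the padding and recovers only the first data, so the alignment trick is invisible above the lower layer.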


    System and method for intelligent software-controlled cache injection
    10.
    Invention Grant
    System and method for intelligent software-controlled cache injection (Expired)

    Publication No.: US07774554B2

    Publication Date: 2010-08-10

    Application No.: US11676745

    Filing Date: 2007-02-20

    CPC classification number: G06F12/0862 G06F12/0817

    Abstract: A system and method to provide injection of important data directly into a processor's cache location when that processor has previously indicated interest in the data. The memory subsystem at a target processor will determine if the memory address of data to be written to a memory location associated with the target processor is found in a processor cache of the target processor. If it is determined that the memory address is found in a target processor's cache, the data will be directly written to that cache at the same time that the data is being provided to a location in main memory.
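The injection policy can be sketched as a conditional mirror of the memory write. Modeling the cache and memory as dictionaries is a deliberate simplification; in the patented system the check is done by the target processor's memory subsystem in hardware.

```python
# Hedged sketch: a write to main memory is mirrored into the target
# processor's cache only when the address is already present there, i.e.
# the processor previously indicated interest in that data.
def write_with_injection(addr, value, main_memory, target_cache):
    main_memory[addr] = value        # the data always reaches main memory
    if addr in target_cache:         # cache hit: inject the fresh value too
        target_cache[addr] = value
        return True
    return False                     # no prior interest: memory write only

memory, cache = {}, {0x40: "stale"}
assert write_with_injection(0x40, "fresh", memory, cache)      # injected
assert not write_with_injection(0x80, "other", memory, cache)  # memory only
assert cache[0x40] == "fresh" and memory[0x80] == "other"
```

Gating injection on a prior cache hit avoids polluting the cache with data the processor never asked for.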

