Software accessible fast VA to PA translation
    1.
    Granted invention patent (in force)

    Publication No.: US07350053B1

    Publication Date: 2008-03-25

    Application No.: US11034345

    Filing Date: 2005-01-11

    IPC Classification: G06F9/26 G06F9/34 G06F12/00

    CPC Classification: G06F12/1081 G06F12/1027

    Abstract: A method to communicate data is disclosed which includes communicating a virtual address to a translation lookaside buffer (TLB) and translating the virtual address to a physical address of a computer memory. The method also includes loading the physical address translated by the TLB into a register within a processor and transmitting the data from the physical address to a destination computing device.

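    The flow described above — translate a virtual address through the TLB, hold the resulting physical address in a processor register, then transmit from that physical address — can be illustrated with a minimal C sketch. The TLB layout, page size, and the send_to_device helper are hypothetical stand-ins, not the patented hardware.

        #include <stdint.h>
        #include <stddef.h>
        #include <stdio.h>

        #define TLB_ENTRIES 64
        #define PAGE_SHIFT  12

        /* Hypothetical software-visible TLB entry: one VA page mapped to one PA page. */
        typedef struct {
            uint64_t va_page;
            uint64_t pa_page;
            int      valid;
        } tlb_entry_t;

        /* Translate a virtual address by probing the TLB; returns 1 on a hit. */
        static int tlb_translate(const tlb_entry_t *tlb, uint64_t va, uint64_t *pa)
        {
            uint64_t vpn = va >> PAGE_SHIFT;
            for (size_t i = 0; i < TLB_ENTRIES; i++) {
                if (tlb[i].valid && tlb[i].va_page == vpn) {
                    *pa = (tlb[i].pa_page << PAGE_SHIFT) | (va & ((1u << PAGE_SHIFT) - 1));
                    return 1;
                }
            }
            return 0;
        }

        /* Stand-in for loading the PA into a processor register and transmitting
         * the data at that address to a destination computing device. */
        static void send_to_device(uint64_t pa, size_t len)
        {
            printf("transmit %zu bytes starting at physical address 0x%llx\n",
                   len, (unsigned long long)pa);
        }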

Optimizing hardware TLB reload performance in a highly-threaded processor with multiple page sizes
    2.
    Granted invention patent (in force)

    Publication No.: US07543132B1

    Publication Date: 2009-06-02

    Application No.: US10880985

    Filing Date: 2004-06-30

    IPC Classification: G06F12/10

    Abstract: A method and apparatus for improving the performance of translation look-aside buffer reloads in multithreading, multi-core processors. TSB prediction is accomplished by hashing a plurality of data parameters and generating an index that is provided as an input to a predictor array to predict the TSB page size. In one embodiment of the invention, the predictor array comprises two-bit saturating up-down counters that are used to enhance the accuracy of the TSB prediction. The saturating up-down counters are configured to avoid making rapid changes in the TSB prediction upon detection of an error: multiple misses occur before the prediction output is changed. The page size specified by the predictor index is searched first. Using this technique, errors are minimized because the counter leads to the correct result at least half the time.

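    An illustrative C sketch of the prediction scheme this abstract outlines: request parameters are hashed into an index over a table of two-bit saturating up-down counters, the counter chooses which TSB page size is searched first, and the prediction only flips after repeated misses. The hash function and table size below are assumptions chosen for illustration.

        #include <stdint.h>

        #define PRED_ENTRIES 256

        /* Two-bit saturating up-down counters: values 0-1 predict the small page
         * size, 2-3 predict the large page size.  A single miss only nudges the
         * counter, so the prediction flips only after repeated misses. */
        static uint8_t predictor[PRED_ENTRIES];

        /* Hypothetical hash of the parameters fed to the predictor. */
        static unsigned pred_index(uint64_t context_id, uint64_t vpn)
        {
            uint64_t h = (context_id * 0x9E3779B97F4A7C15ULL) ^ vpn;
            return (unsigned)(h & (PRED_ENTRIES - 1));
        }

        /* Returns 1 to search the large-page TSB first, 0 for the small-page TSB. */
        static int predict_large_page(uint64_t context_id, uint64_t vpn)
        {
            return predictor[pred_index(context_id, vpn)] >= 2;
        }

        /* Update after the reload resolves which page size was actually correct. */
        static void predictor_update(uint64_t context_id, uint64_t vpn, int was_large)
        {
            uint8_t *c = &predictor[pred_index(context_id, vpn)];
            if (was_large) { if (*c < 3) (*c)++; }
            else           { if (*c > 0) (*c)--; }
        }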

Apparatus and method for floating-point exception prediction and recovery
    3.
    Granted invention patent (in force)

    Publication No.: US07373489B1

    Publication Date: 2008-05-13

    Application No.: US10880713

    Filing Date: 2004-06-30

    IPC Classification: G06F9/00 G06F7/38 G06F9/44

    Abstract: An apparatus and method for floating-point exception prediction and recovery. In one embodiment, a processor may include instruction fetch logic configured to issue a first instruction from one of a plurality of threads and to successively issue a second instruction from another one of the plurality of threads. The processor may also include floating-point arithmetic logic configured to execute a floating-point instruction issued by the instruction fetch logic from a given one of the plurality of threads, and further configured to determine whether the floating-point instruction generates an exception, and may further include exception prediction logic configured to predict whether the floating-point instruction will generate the exception, where the prediction occurs before the floating-point arithmetic logic determines whether the floating-point instruction generates the exception.

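    A speculative C sketch of the general idea of predicting a floating-point exception before the arithmetic unit resolves it, here by inspecting the operand exponents of a double-precision multiply for likely overflow or underflow. The heuristic is an assumption chosen for illustration and is not taken from the patent.

        #include <math.h>
        #include <float.h>

        /* Predict, before executing a double-precision multiply, whether it is
         * likely to raise an overflow or underflow exception, by summing the
         * operands' binary exponents.  A coarse illustrative heuristic only. */
        static int predict_fp_exception(double a, double b)
        {
            int ea, eb;
            if (a == 0.0 || b == 0.0)
                return 0;                  /* exact zero result: no trap expected */
            (void)frexp(a, &ea);
            (void)frexp(b, &eb);
            int sum = ea + eb;             /* product exponent is roughly ea + eb */
            return sum > DBL_MAX_EXP || sum < DBL_MIN_EXP;
        }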

METHOD AND SYSTEM FOR OFFLOADING COMPUTATION FLEXIBLY TO A COMMUNICATION ADAPTER
    4.
    Invention patent application (in force)

    Publication No.: US20130007181A1

    Publication Date: 2013-01-03

    Application No.: US13173473

    Filing Date: 2011-06-30

    IPC Classification: G06F15/167

    CPC Classification: G06F9/5027 G06F2209/509

    Abstract: A method for offloading computation flexibly to a communication adapter includes receiving a message that includes a procedure image identifier associated with a procedure image of a host application, determining a procedure image and a communication adapter processor using the procedure image identifier, and forwarding the message to the communication adapter processor configured to execute the procedure image. The method further includes executing, on the communication adapter processor independent of a host processor, the procedure image in communication adapter memory by acquiring a host memory latch for a memory block in host memory, reading the memory block in the host memory after acquiring the host memory latch, manipulating, by executing the procedure image, the memory block in the communication adapter memory to obtain a modified memory block, committing the modified memory block to the host memory, and releasing the host memory latch.

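    A C sketch of the latch-protected read-modify-commit sequence the abstract describes, as it might run on the adapter processor. The block and latch types are hypothetical, and a caller-supplied function stands in for the offloaded procedure image.

        #include <stdint.h>
        #include <string.h>
        #include <stdatomic.h>

        #define BLOCK_SIZE 256

        /* Hypothetical host-memory block guarded by a one-bit latch. */
        typedef struct {
            atomic_flag latch;
            uint8_t     data[BLOCK_SIZE];
        } host_block_t;

        /* The procedure image: any function that transforms a copy of the block. */
        typedef void (*procedure_image_t)(uint8_t *block, size_t len);

        /* Runs on the communication adapter processor, independent of the host CPU. */
        static void run_offloaded_procedure(host_block_t *host, procedure_image_t proc)
        {
            uint8_t local[BLOCK_SIZE];                      /* adapter memory      */

            while (atomic_flag_test_and_set(&host->latch))  /* acquire host latch  */
                ;
            memcpy(local, host->data, BLOCK_SIZE);          /* read host block     */
            proc(local, BLOCK_SIZE);                        /* manipulate the copy */
            memcpy(host->data, local, BLOCK_SIZE);          /* commit the result   */
            atomic_flag_clear(&host->latch);                /* release the latch   */
        }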

Scalable Interface for Connecting Multiple Computer Systems Which Performs Parallel MPI Header Matching
    5.
    Invention patent application (in force)

    Publication No.: US20120243542A1

    Publication Date: 2012-09-27

    Application No.: US13489496

    Filing Date: 2012-06-06

    IPC Classification: H04L12/56

    CPC Classification: G06F15/17337

    Abstract: An interface device for a compute node in a computer cluster which performs Message Passing Interface (MPI) header matching using parallel matching units. The interface device comprises a memory that stores posted receive queues and unexpected queues. The posted receive queues store receive requests from a process executing on the compute node. The unexpected queues store headers of send requests (e.g., from other compute nodes) that do not have a matching receive request in the posted receive queues. The interface device also comprises a plurality of hardware pipelined matcher units. The matcher units perform header matching to determine if a header in the send request matches any headers in any of the plurality of posted receive queues. Matcher units perform the header matching in parallel. In other words, the plural matching units are configured to search the memory concurrently to perform header matching.

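    A sequential C sketch of the matching rule described above (the claimed hardware applies it across several matcher units in parallel): an incoming send header is compared against the posted receive queue and parked in the unexpected queue when nothing matches. The field names and wildcard values follow common MPI conventions rather than the patent's exact layout.

        #include <stddef.h>

        #define QUEUE_LEN  64
        #define ANY_SOURCE (-1)
        #define ANY_TAG    (-1)

        typedef struct { int source; int tag; int context; } mpi_header_t;

        typedef struct {
            mpi_header_t entries[QUEUE_LEN];
            size_t       count;
        } header_queue_t;

        /* Does a posted receive match an incoming send header? */
        static int header_match(const mpi_header_t *recv, const mpi_header_t *send)
        {
            return recv->context == send->context &&
                   (recv->source == ANY_SOURCE || recv->source == send->source) &&
                   (recv->tag    == ANY_TAG    || recv->tag    == send->tag);
        }

        /* Search the posted receive queue; on a miss, park the header in the
         * unexpected queue.  Returns the matching index, or -1 if unmatched. */
        static int match_or_park(header_queue_t *posted, header_queue_t *unexpected,
                                 const mpi_header_t *send)
        {
            for (size_t i = 0; i < posted->count; i++)
                if (header_match(&posted->entries[i], send))
                    return (int)i;
            if (unexpected->count < QUEUE_LEN)
                unexpected->entries[unexpected->count++] = *send;
            return -1;
        }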

Caching data in a cluster computing system which avoids false-sharing conflicts
    6.
    Granted invention patent (in force)

    Publication No.: US08095617B2

    Publication Date: 2012-01-10

    Application No.: US12495635

    Filing Date: 2009-06-30

    IPC Classification: G06F15/16

    CPC Classification: G06F12/0817 G06F12/0813

    Abstract: Managing operations in a first compute node of a multi-computer system. A remote write may be received to a first address of a remote compute node. A first data structure entry may be created in a data structure, which may include the first address and status information indicating that the remote write has been received. Upon determining that the local cache of the first compute node has been updated with the remote write, the remote write may be issued to the remote compute node. Accordingly, the first data structure entry may be released upon completion of the remote write.

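    A C sketch of the bookkeeping the abstract outlines: each received remote write gets a tracking entry holding the target address and a status, the write is issued to the remote node only once the local cache has been updated, and the entry is released on completion. The table layout and names are illustrative assumptions.

        #include <stdint.h>
        #include <stddef.h>

        typedef enum { WRITE_FREE, WRITE_RECEIVED, WRITE_ISSUED } write_status_t;

        typedef struct {
            uint64_t       address;      /* first address on the remote compute node */
            write_status_t status;
        } remote_write_entry_t;

        #define MAX_PENDING 32
        static remote_write_entry_t pending[MAX_PENDING];

        /* Record a newly received remote write.  Returns the entry index, or -1. */
        static int record_remote_write(uint64_t address)
        {
            for (size_t i = 0; i < MAX_PENDING; i++) {
                if (pending[i].status == WRITE_FREE) {
                    pending[i].address = address;
                    pending[i].status  = WRITE_RECEIVED;
                    return (int)i;
                }
            }
            return -1;                    /* table full: caller must stall */
        }

        /* Once the local cache has absorbed the write, issue it to the remote node. */
        static void on_local_cache_updated(int idx)
        {
            if (pending[idx].status == WRITE_RECEIVED)
                pending[idx].status = WRITE_ISSUED;   /* hand off to the interconnect */
        }

        /* Completion of the remote write releases the tracking entry. */
        static void on_remote_write_complete(int idx)
        {
            pending[idx].status = WRITE_FREE;
        }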

Software Aware Throttle Based Flow Control
    7.
    Invention patent application (in force)

    Publication No.: US20100332676A1

    Publication Date: 2010-12-30

    Application No.: US12495452

    Filing Date: 2009-06-30

    IPC Classification: G06F15/16

    Abstract: A system, comprising a compute node and coupled network adapter (NA), that supports improved data transfer request buffering and a more efficient method of determining the completion status of data transfer requests. Transfer requests received by the NA are stored in a first buffer then transmitted on a network interface. When significant network delays are detected and the first buffer is full, the NA sets a flag to stop software from issuing transfer requests. Compliant software checks this flag before sending requests and does not issue further requests. A second NA buffer stores additional received transfer requests that may already have been in transit. When conditions improve, the flag is cleared and the first buffer is used again. Completion status is efficiently determined by grouping network transfer requests. The NA counts received requests and completed network requests for each group. Software determines whether a group of requests is complete by reading a count value.

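    A C sketch of the software-visible side of the scheme: compliant software checks the adapter's throttle flag before issuing a transfer request, and completion of a group of transfers is detected by comparing per-group issued and completed counts. The register layout is an assumption, not the actual adapter interface.

        #include <stdint.h>
        #include <stdbool.h>

        #define GROUPS 8

        /* Hypothetical adapter state as software would see it. */
        typedef struct {
            volatile uint32_t throttle;           /* set by the NA when its buffer fills   */
            volatile uint32_t issued[GROUPS];     /* transfer requests received, per group */
            volatile uint32_t completed[GROUPS];  /* network requests finished, per group  */
        } na_regs_t;

        /* Compliant software: do not issue a transfer while the adapter throttles. */
        static bool try_issue_transfer(na_regs_t *na, int group)
        {
            if (na->throttle)
                return false;          /* caller retries after the flag clears */
            na->issued[group]++;       /* stands in for writing a request descriptor */
            return true;
        }

        /* A group of transfers is complete when every issued request has completed. */
        static bool group_complete(const na_regs_t *na, int group)
        {
            return na->completed[group] == na->issued[group];
        }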

Distributed vector architecture
    8.
    Granted invention patent (expired)

    Publication No.: US5946496A

    Publication Date: 1999-08-31

    Application No.: US988524

    Filing Date: 1997-12-10

    IPC Classification: G06F15/78 G06F15/76

    CPC Classification: G06F15/8061

    Abstract: A vector/scalar computer system has nodes interconnected by an interconnect network. Each node includes a vector execution unit, a scalar execution unit, physical vector registers holding physical vector elements, a mapping vector register holding a mapping vector, and a memory. The physical vector registers from the nodes together form an architectural vector register having architectural vector elements. The mapping vector defines an assignment of architectural vector elements to physical vector elements for its node. The memories from the nodes together form an aggregate memory.

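    A C sketch of the mapping the abstract describes: an architectural vector register is spread across nodes, and each node's mapping vector records which architectural elements its physical vector register holds. The round-robin assignment below is an illustrative assumption, not the patented layout.

        #define NODES       4
        #define PHYS_ELEMS  16                     /* physical vector elements per node */
        #define ARCH_ELEMS  (NODES * PHYS_ELEMS)   /* architectural vector length       */

        /* Per-node mapping vector: physical element slot -> architectural element. */
        static int mapping_vector[NODES][PHYS_ELEMS];

        /* One possible assignment: architectural element a lives on node a % NODES. */
        static void build_round_robin_mapping(void)
        {
            for (int a = 0; a < ARCH_ELEMS; a++)
                mapping_vector[a % NODES][a / NODES] = a;
        }

        /* Locate architectural element a by consulting the mapping vectors;
         * returns 1 and fills in the owning node and physical slot, else 0. */
        static int locate_element(int a, int *node, int *phys_slot)
        {
            for (int n = 0; n < NODES; n++)
                for (int p = 0; p < PHYS_ELEMS; p++)
                    if (mapping_vector[n][p] == a) {
                        *node = n;
                        *phys_slot = p;
                        return 1;
                    }
            return 0;
        }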

Performing high granularity prefetch from remote memory into a cache on a device without change in address
    9.
    Granted invention patent (in force)

    Publication No.: US08549231B2

    Publication Date: 2013-10-01

    Application No.: US12684689

    Filing Date: 2010-01-08

    IPC Classification: G06F12/08

    CPC Classification: G06F12/0862 G06F12/1081

    Abstract: Provided is a method, which may be performed on a computer, for prefetching data over an interface. The method may include receiving a first data prefetch request for first data of a first data size stored at a first physical address corresponding to a first virtual address. The first data prefetch request may include second data specifying the first virtual address and third data specifying the first data size. The first virtual address and the first data size may define a first virtual address range. The method may also include converting the first data prefetch request into a first data retrieval request. To convert the first data prefetch request into a first data retrieval request, the first virtual address specified by the second data may be translated into the first physical address. The method may further include issuing the first data retrieval request at the interface, receiving the first data at the interface, and storing at least a portion of the received first data in a cache. Storing may include setting each of one or more cache tags associated with the at least a portion of the received first data to correspond to the first physical address.

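    A C sketch of converting the prefetch request described above into a retrieval request: the request carries a virtual address and a size, the virtual address is translated to a physical address without otherwise changing the request, and the returned data is cached with tags set to that physical address. The translation and cache-fill helpers are hypothetical stand-ins.

        #include <stdint.h>
        #include <stddef.h>

        #define CACHE_LINE 64

        /* A prefetch request as described: a virtual address (the second data)
         * and a data size (the third data), together defining a VA range. */
        typedef struct { uint64_t virt_addr; size_t size; } prefetch_request_t;

        /* The retrieval request issued at the interface, addressed physically. */
        typedef struct { uint64_t phys_addr; size_t size; } retrieval_request_t;

        /* Hypothetical helpers standing in for the device's translation logic
         * and its cache fill path. */
        extern uint64_t translate_va_to_pa(uint64_t virt_addr);
        extern void     cache_fill(uint64_t phys_tag, const uint8_t *line);

        /* Convert the prefetch request into a retrieval request: same range,
         * with the virtual address replaced by its physical translation. */
        static retrieval_request_t convert_prefetch(const prefetch_request_t *req)
        {
            retrieval_request_t r;
            r.phys_addr = translate_va_to_pa(req->virt_addr);
            r.size      = req->size;
            return r;
        }

        /* When the data returns, tag each cached line with its physical address. */
        static void store_in_cache(const retrieval_request_t *r, const uint8_t *data)
        {
            for (size_t off = 0; off < r->size; off += CACHE_LINE)
                cache_fill(r->phys_addr + off, data + off);
        }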

Network use of virtual addresses without pinning or registration
    10.
    Granted invention patent (in force)

    Publication No.: US08234407B2

    Publication Date: 2012-07-31

    Application No.: US12495805

    Filing Date: 2009-06-30

    IPC Classification: G06F15/16 G06F13/36 G06F12/00

    CPC Classification: G06F12/1027 G06F12/1081

    Abstract: A system comprising a compute node and coupled network adapter (NA) that allows the NA to directly use CPU virtual addresses without pinning pages in system memory. The NA performs memory accesses in response to requests from various sources. Each request source is assigned to a context. Each context has a descriptor that controls the address translation performed by the NA. When the CPU wants to update translation information, it sends a synchronization request to the NA that causes the NA to stop fetching a category of requests associated with the information update. The category may be requests associated with a context or a page address. Once the NA determines that all the fetched requests in the category have completed, it notifies the CPU and the CPU performs the information update. Once the update is complete, the CPU clears the synchronization request and the NA starts fetching requests in the category.

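    A C sketch of the synchronization handshake the abstract describes: before changing translation information, the CPU asks the adapter to stop fetching the affected category of requests, waits for in-flight requests to drain, applies the update, and clears the request. Field names and the polling structure are assumptions for illustration.

        #include <stdint.h>
        #include <stdbool.h>

        /* Hypothetical per-context state shared between the CPU and the NA. */
        typedef struct {
            volatile bool     sync_requested;  /* CPU asks the NA to stop fetching     */
            volatile bool     drained;         /* NA: all fetched requests have ended  */
            volatile uint32_t inflight;        /* NA-maintained count for the category */
        } na_context_t;

        /* Adapter side, called periodically: honor a pending synchronization. */
        static void na_poll(na_context_t *ctx)
        {
            if (ctx->sync_requested) {
                /* stop fetching new requests in this category; finish in-flight ones */
                if (ctx->inflight == 0)
                    ctx->drained = true;       /* notify the CPU it may update now */
            }
        }

        /* CPU side: quiesce the category, update translations, resume fetching. */
        static void cpu_update_translation(na_context_t *ctx, void (*apply_update)(void))
        {
            ctx->drained = false;
            ctx->sync_requested = true;        /* NA stops fetching this category     */
            while (!ctx->drained)              /* wait for in-flight requests to end  */
                ;                              /* (the NA runs na_poll concurrently)  */
            apply_update();                    /* safe: NA is not using the old maps  */
            ctx->sync_requested = false;       /* clear: NA resumes fetching          */
        }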