Method and apparatus for achieving higher frequencies of exactly rounded results
    21.
    Invention Grant
    Method and apparatus for achieving higher frequencies of exactly rounded results (Expired)

    Publication Number: US6134574A

    Publication Date: 2000-10-17

    Application Number: US75073

    Application Date: 1998-05-08

    Abstract: A multiplier configured to obtain higher frequencies of exactly rounded results by adding an adjustment constant to intermediate products generated during iterative multiplication operations is disclosed. One such iterative multiplication operation is the Newton-Raphson iteration, which may be utilized by the multiplier to perform reciprocal calculations and reciprocal square root calculations. For each iteration, the results converge toward an infinitely precise result. To improve the frequency of the exactly rounded result, the results of the iterative calculations may be studied for a large number of differing input operands to determine the best suited value for the adjustment constant. The multiplier may also be configured to perform scalar and packed vector multiplication using the same hardware.
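
    Example (illustrative sketch, not the patented hardware): a software model of Newton-Raphson reciprocal refinement in which a small adjustment constant is added to the last intermediate result before the final rounding. The seed construction, the iteration count, and the value of the adjustment constant are assumptions chosen for demonstration; the patent determines its constant empirically over many input operands.

    #include <cmath>
    #include <cstdint>
    #include <cstring>
    #include <cstdio>

    // Reciprocal of a normal, nonzero float via Newton-Raphson iteration:
    // x_{n+1} = x_n * (2 - d * x_n), computed in double as the wider
    // intermediate format.
    static float recip_nr(float d)
    {
        // Hypothetical adjustment constant added before the final rounding;
        // the patent picks its value by studying many input operands.
        const double adjust = std::ldexp(1.0, -55);

        // Low-precision seed: 1/d truncated to about 8 mantissa bits, standing
        // in for a hardware lookup-table estimate.
        float seed = 1.0f / d;
        std::uint32_t bits;
        std::memcpy(&bits, &seed, sizeof bits);
        bits &= 0xFFFF8000u;            // keep sign, exponent, top mantissa bits
        std::memcpy(&seed, &bits, sizeof seed);

        double x = seed;
        x = x * (2.0 - (double)d * x);  // first iteration: roughly 16 good bits
        x = x * (2.0 - (double)d * x);  // second iteration: roughly 32 good bits

        // Nudge the intermediate result so that rounding to float agrees with
        // the exactly rounded 1/d more often.
        return (float)(x + adjust);
    }

    int main()
    {
        printf("recip_nr(3.0f) = %.9g, 1.0f/3.0f = %.9g\n",
               recip_nr(3.0f), 1.0f / 3.0f);
        return 0;
    }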


Microprocessor including an efficient implementation of an accumulate instruction
    22.
    Invention Grant
    Microprocessor including an efficient implementation of an accumulate instruction (Expired)

    Publication Number: US5918062A

    Publication Date: 1999-06-29

    Application Number: US14507

    Application Date: 1998-01-28

    Abstract: An execution unit configured to perform a plurality of arithmetic operations using the same set of operands. These operands include corresponding input vector values in each of a plurality of input registers. The execution unit is coupled to receive these input vector values, as well as an instruction value indicative of one of the plurality of arithmetic operations. In one embodiment, the plurality of arithmetic operations includes a vectored add instruction, a vectored subtract instruction, a vectored reverse subtract instruction, and an accumulate instruction. The vectored instructions perform arithmetic operations concurrently using corresponding values from each of the plurality of input registers. The accumulate instruction, however, is executable to add together all input values within a single input register. The execution unit further includes a multiplexer unit configured to selectively route the input vector values to a plurality of adder units according to the opcode value. In an embodiment in which the execution unit is configured to perform subtraction operations as well as addition, the multiplexer unit is additionally configured to selectively route negated versions (either one's or two's complement format) to the plurality of adder units. Each of the plurality of adder units is configured to generate a sum based upon the values conveyed from the multiplexer unit. The accumulate instruction advantageously allows important operations such as the matrix multiply to be performed rapidly. Because the matrix multiply is an integral part of many applications (particularly graphics applications), the accumulate instruction may lead to increased overall system performance.
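
    Example (illustrative sketch): a software analogue of the vectored add and accumulate operations described above, and of the matrix-multiply pattern the accumulate instruction accelerates. The four-element vector width and the function names are assumptions for illustration; the patent describes hardware datapaths, not C code.

    #include <cstdio>

    struct Vec4 { float v[4]; };            // stand-in for one input register

    // Vectored add: element-wise sums of corresponding values in two registers.
    static Vec4 vadd(const Vec4 &a, const Vec4 &b)
    {
        Vec4 r;
        for (int i = 0; i < 4; ++i) r.v[i] = a.v[i] + b.v[i];
        return r;
    }

    // Accumulate: add together all values held in a single register.
    static float vaccumulate(const Vec4 &a)
    {
        return a.v[0] + a.v[1] + a.v[2] + a.v[3];
    }

    // One element of a 4x4 matrix product C = A * B, computed as a packed
    // multiply followed by an accumulate, which is the pattern the abstract
    // says the accumulate instruction speeds up.
    static float dot_row_col(const float A[4][4], const float B[4][4],
                             int row, int col)
    {
        Vec4 prod;
        for (int k = 0; k < 4; ++k) prod.v[k] = A[row][k] * B[k][col];
        return vaccumulate(prod);
    }

    int main()
    {
        Vec4 a = {{1, 2, 3, 4}}, b = {{10, 20, 30, 40}};
        Vec4 s = vadd(a, b);
        printf("vadd -> %g %g %g %g, accumulate -> %g\n",
               s.v[0], s.v[1], s.v[2], s.v[3], vaccumulate(s));

        float I[4][4] = {{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}};
        printf("dot_row_col(I, I, 2, 2) = %g\n", dot_row_col(I, I, 2, 2));
        return 0;
    }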


Efficient matrix multiplication on a parallel processing device
    23.
    Invention Grant
    Efficient matrix multiplication on a parallel processing device (Active)

    Publication Number: US08589468B2

    Publication Date: 2013-11-19

    Application Number: US12875961

    Application Date: 2010-09-03

    CPC classification number: G06F17/16

    Abstract: The present invention enables efficient matrix multiplication operations on parallel processing devices. One embodiment is a method for mapping CTAs to result matrix tiles for matrix multiplication operations. Another embodiment is a second method for mapping CTAs to result tiles. Yet other embodiments are methods for mapping the individual threads of a CTA to the elements of a tile for result tile computations, source tile copy operations, and source tile copy and transpose operations. The present invention advantageously enables result matrix elements to be computed on a tile-by-tile basis using multiple CTAs executing concurrently on different streaming multiprocessors, enables source tiles to be copied to local memory to reduce the number of accesses to the global memory when computing a result tile, and enables coalesced read operations from the global memory as well as write operations to the local memory without bank conflicts.
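
    Example (illustrative sketch): a standard shared-memory tiled matrix-multiply CUDA kernel showing the ideas the abstract describes: each thread block (CTA) computes one result tile, source tiles are staged in on-chip shared memory, and global reads and writes are coalesced. It is not the specific CTA-to-tile or thread-to-element mapping the patent claims; the tile size, the square-matrix assumption, and the requirement that n be a multiple of TILE are choices made for this sketch.

    #define TILE 16

    __global__ void matmul_tiled(const float *A, const float *B, float *C, int n)
    {
        __shared__ float As[TILE][TILE];     // source tile of A in on-chip memory
        __shared__ float Bs[TILE][TILE];     // source tile of B in on-chip memory

        int row = blockIdx.y * TILE + threadIdx.y;   // result row this thread owns
        int col = blockIdx.x * TILE + threadIdx.x;   // result column this thread owns
        float acc = 0.0f;

        for (int t = 0; t < n / TILE; ++t) {
            // Each thread copies one element of the current source tiles; the
            // threads of a warp read consecutive addresses, so the loads coalesce.
            As[threadIdx.y][threadIdx.x] = A[row * n + t * TILE + threadIdx.x];
            Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * n + col];
            __syncthreads();

            for (int k = 0; k < TILE; ++k)
                acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
            __syncthreads();
        }
        C[row * n + col] = acc;              // one coalesced store per thread
    }

    // Host-side launch sketch:
    //   dim3 block(TILE, TILE);
    //   dim3 grid(n / TILE, n / TILE);
    //   matmul_tiled<<<grid, block>>>(dA, dB, dC, n);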


Maximized memory throughput on parallel processing devices
    24.
    Invention Grant
    Maximized memory throughput on parallel processing devices (Active)

    Publication Number: US08327123B2

    Publication Date: 2012-12-04

    Application Number: US13069384

    Application Date: 2011-03-23

    CPC classification number: G06F9/3887 G06F9/3455 G06F9/3851 G06F9/3889

    Abstract: In parallel processing devices, for streaming computations, processing of each data element of the stream may not be computationally intensive and thus may take relatively small amounts of time to compute as compared to the memory access times required to read the stream and write the results. Therefore, memory throughput often limits the performance of the streaming computation. Generally stated, methods are provided for achieving improved, optimized, or, ultimately, maximized memory throughput in such memory-throughput-limited streaming computations. Streaming computation performance is maximized by improving the aggregate memory throughput across the plurality of processing elements and threads. High aggregate memory throughput is achieved by balancing processing loads between threads and groups of threads and a hardware memory interface coupled to the parallel processing devices.
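
    Example (illustrative sketch): a memory-throughput-bound streaming kernel written so that work is balanced across all threads and every global access is coalesced: consecutive threads touch consecutive elements, and a grid-stride loop lets one grid cover any stream length. This shows the general technique only, not the specific load-balancing method the patent claims; the launch sizing in the trailing comment is a placeholder.

    #include <cstddef>

    __global__ void scale_stream(const float *in, float *out, float k, size_t n)
    {
        size_t stride = (size_t)gridDim.x * blockDim.x;      // total thread count
        for (size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
             i < n; i += stride) {
            // The per-element computation is cheap; throughput is limited by how
            // fast the read of in[i] and the write of out[i] move through memory.
            out[i] = k * in[i];
        }
    }

    // Host-side launch sketch (placeholder sizing): enough blocks to keep every
    // multiprocessor busy, e.g.
    //   scale_stream<<<8 * numberOfSMs, 256>>>(d_in, d_out, 2.0f, n);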


MAXIMIZED MEMORY THROUGHPUT ON PARALLEL PROCESSING DEVICES
    25.
    Invention Application
    MAXIMIZED MEMORY THROUGHPUT ON PARALLEL PROCESSING DEVICES (Active)

    Publication Number: US20110173414A1

    Publication Date: 2011-07-14

    Application Number: US13069384

    Application Date: 2011-03-23

    CPC classification number: G06F9/3887 G06F9/3455 G06F9/3851 G06F9/3889

    Abstract: In parallel processing devices, for streaming computations, processing of each data element of the stream may not be computationally intensive and thus may take relatively small amounts of time to compute as compared to the memory access times required to read the stream and write the results. Therefore, memory throughput often limits the performance of the streaming computation. Generally stated, methods are provided for achieving improved, optimized, or, ultimately, maximized memory throughput in such memory-throughput-limited streaming computations. Streaming computation performance is maximized by improving the aggregate memory throughput across the plurality of processing elements and threads. High aggregate memory throughput is achieved by balancing processing loads between threads and groups of threads and a hardware memory interface coupled to the parallel processing devices.


Apparatus and method for superforwarding load operands in a microprocessor
    26.
    Invention Grant
    Apparatus and method for superforwarding load operands in a microprocessor (Active)

    Publication Number: US06442677B1

    Publication Date: 2002-08-27

    Application Number: US09329497

    Application Date: 1999-06-10

    CPC classification number: G06F9/30043 G06F9/3826

    Abstract: An apparatus and method for superforwarding load operands in a microprocessor are provided. An execution unit in a microprocessor is configured to receive a load instruction and a subsequent instruction. If the load instruction corresponds to a simple load instruction, a destination operand of the load instruction can be superforwarded to a subsequent instruction if the subsequent instruction specifies a source operand that depends on the destination operand of the load instruction. The subsequent instruction is not required to wait until the load instruction executes or completes, and can be scheduled and/or executed prior to or at the same time as the load instruction. Consequently, latencies associated with operand dependencies may be reduced.
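
    Example (illustrative sketch): a purely software model of the scheduling decision described above. The structure fields and the notion of a "simple" load are stand-ins invented for this sketch; the patent describes microarchitectural hardware, not C code.

    struct Op {
        bool is_load;     // instruction reads memory into a register
        bool simple;      // e.g. a plain load with no conversion or extra work
        int  dest_reg;    // destination register of the load
    };

    // A dependent instruction whose source register is produced by 'producer'
    // may be scheduled without waiting for the load to execute or complete when
    // the producing load is simple: its data can be superforwarded directly.
    static bool can_issue_before_load_completes(const Op &producer, int src_reg)
    {
        return producer.is_load &&
               producer.simple &&
               producer.dest_reg == src_reg;
    }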


Method and apparatus for rapid execution of FCOM and FSTSW
    27.
    Invention Grant
    Method and apparatus for rapid execution of FCOM and FSTSW (Active)

    Publication Number: US06425074B1

    Publication Date: 2002-07-23

    Application Number: US09393524

    Application Date: 1999-09-10

    Abstract: A microprocessor configured to rapidly execute floating point store status word (FSTSW) type instructions that are immediately preceded by floating point compare (FCOM) type instructions is disclosed. FCOM-type instructions are modified to store their results to an architectural floating point status word and a temporary destination register. If an FSTSW-type instruction is detected immediately following an FCOM-type instruction, then the FSTSW-type instruction is transformed into a special fast floating point store status word (FSTSWEF) instruction. Unlike the FSTSW-type instruction, which is serializing and negatively impacts performance, the FSTSWEF instruction is not serializing and allows execution to continue without undue serialization. A computer system and method for rapidly executing FSTSW instructions immediately preceded by FCOM-type instructions are also disclosed.
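
    Example (illustrative sketch): the detection step described above, modeled as a pass over a decoded instruction stream: an FSTSW-type instruction that immediately follows an FCOM-type instruction is replaced with the fast, non-serializing variant the abstract calls FSTSWEF. The enumeration and the in-memory representation of the stream are invented for this sketch.

    #include <cstddef>
    #include <vector>

    enum class OpKind { FCOM_TYPE, FSTSW_TYPE, FSTSWEF, OTHER };

    static void promote_fast_fstsw(std::vector<OpKind> &stream)
    {
        for (std::size_t i = 1; i < stream.size(); ++i) {
            if (stream[i] == OpKind::FSTSW_TYPE &&
                stream[i - 1] == OpKind::FCOM_TYPE) {
                // The FCOM-type instruction already wrote its result both to the
                // architectural floating-point status word and to a temporary
                // register, so this status-word store need not serialize.
                stream[i] = OpKind::FSTSWEF;
            }
        }
    }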


Apparatus and method for executing floating-point store instructions in a microprocessor
    28.
    Invention Grant
    Apparatus and method for executing floating-point store instructions in a microprocessor (Expired)

    Publication Number: US06408379B1

    Publication Date: 2002-06-18

    Application Number: US09329718

    Application Date: 1999-06-10

    Abstract: An apparatus and method for executing floating-point store instructions in a microprocessor is provided. If store data of a floating-point store instruction corresponds to a tiny number and an underflow exception is masked, then a trap routine can be executed to generate corrected store data and complete the store operation. In response to detecting that store data corresponds to a tiny number and the underflow exception is masked, the store data, store address information, and opcode information can be stored prior to initiating the trap routine. The trap routine can be configured to access the store data, store address information, and opcode information. The trap routine can be configured to generate corrected store data and complete the store operation using the store data, store address information, and opcode information.
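
    Example (illustrative sketch): the control flow described above, modeled in software: when the value being stored is tiny and the underflow exception is masked, the store data, address, and opcode are saved before a trap routine generates corrected data and completes the store. The structures, the subnormal test, and the fix_tiny_value placeholder are simplifications invented for this sketch.

    #include <cmath>
    #include <cstdint>

    struct PendingStore {          // state saved before the trap routine runs
        double        data;
        std::uint64_t address;
        std::uint32_t opcode;
    };

    static PendingStore saved;

    static double fix_tiny_value(double d)
    {
        // Placeholder for the correction the trap routine applies; the real
        // microcode produces the properly denormalized store data.
        return d;
    }

    static void underflow_trap_routine(void)
    {
        double corrected = fix_tiny_value(saved.data);
        // ... complete the store of 'corrected' to saved.address, using
        // saved.opcode to select the destination format ...
        (void)corrected;
    }

    static void execute_fp_store(double data, std::uint64_t addr,
                                 std::uint32_t opcode, bool underflow_masked)
    {
        // "Tiny" is simplified here to: nonzero and subnormal in double.
        bool tiny = data != 0.0 && std::fpclassify(data) == FP_SUBNORMAL;
        if (tiny && underflow_masked) {
            saved = PendingStore{data, addr, opcode};   // stash, then trap
            underflow_trap_routine();
            return;
        }
        // ... normal store path ...
    }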


Multi-function bipartite look-up table
    29.
    Invention Grant
    Multi-function bipartite look-up table (Expired)

    Publication Number: US06256653B1

    Publication Date: 2001-07-03

    Application Number: US09015084

    Application Date: 1998-01-29

    Abstract: A multi-function look-up table for determining output values for predetermined ranges of a first mathematical function and a second mathematical function. In one embodiment, the multi-function look-up table is a bipartite look-up table including a first plurality of storage locations and a second plurality of storage locations. The first plurality of storage locations store base values for the first and second mathematical functions. Each base value is an output value (for either the first or second function) corresponding to an input region which includes the look-up table input value. The second plurality of storage locations, on the other hand, store difference values for both the first and second mathematical functions. These difference values are used for linear interpolation in conjunction with a corresponding base value in order to generate a look-up table output value. The multi-function look-up table further includes an address control unit coupled to receive a first input value and a signal which indicates whether an output value is to be generated for the first or second mathematical function. The address control unit then generates a first address value from these signals which is in turn conveyed to the first and second plurality of storage locations. In response to receiving the first address value, the first and second plurality of storage locations are configured to output a first base value and a first difference value, respectively. The first base value and first difference value are then conveyed to an output unit configured to generate a look-up table output value from the two values.
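
    Example (illustrative sketch): a software model of a multi-function look-up table with one table of base values and one of difference values shared by two functions, here 1/x and 1/sqrt(x) on [1, 2), with the function-select signal folded into the table address. A true bipartite table indexes the base and difference tables with different subsets of the input's mantissa bits; that addressing, and the table sizes here, are simplified for this sketch.

    #include <cmath>
    #include <cstdio>

    #define REGIONS 64                      // input regions per function

    static double base_tab[2 * REGIONS];    // base value per (function, region)
    static double diff_tab[2 * REGIONS];    // difference value per (function, region)

    static double f(int which, double x)    // the two supported functions
    {
        return which == 0 ? 1.0 / x : 1.0 / std::sqrt(x);
    }

    static void build_tables(void)
    {
        for (int which = 0; which < 2; ++which) {
            for (int r = 0; r < REGIONS; ++r) {
                double lo = 1.0 + (double)r / REGIONS;        // region start
                double hi = 1.0 + (double)(r + 1) / REGIONS;  // region end
                base_tab[which * REGIONS + r] = f(which, lo);
                diff_tab[which * REGIONS + r] = f(which, hi) - f(which, lo);
            }
        }
    }

    // Address control: the function-select signal and the high bits of the input
    // pick one base value and one difference value; the remaining low-order
    // fraction drives the linear interpolation in the output unit.
    static double lookup(int which, double x)   // x in [1, 2)
    {
        double scaled = (x - 1.0) * REGIONS;
        int    region = (int)scaled;             // high input bits
        double frac   = scaled - region;         // low input bits
        int    addr   = which * REGIONS + region;
        return base_tab[addr] + frac * diff_tab[addr];
    }

    int main()
    {
        build_tables();
        printf("1/1.3       ~ %f (exact %f)\n", lookup(0, 1.3), 1.0 / 1.3);
        printf("1/sqrt(1.3) ~ %f (exact %f)\n", lookup(1, 1.3), 1.0 / std::sqrt(1.3));
        return 0;
    }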


    Method and apparatus for multi-function arithmetic

    Publication Number: US06223198B1

    Publication Date: 2001-04-24

    Application Number: US09134171

    Application Date: 1998-08-14

    Abstract: A multiplier capable of performing signed and unsigned scalar and vector multiplication is disclosed. The multiplier is configured to receive signed or unsigned multiplier and multiplicand operands in scalar or packed vector form. An effective sign for the multiplier and multiplicand operands may be calculated and used to create and select a number of partial products according to Booth's algorithm. Once the partial products have been created and selected, they may be summed and the results may be output. The results may be signed or unsigned, and may represent vector or scalar quantities. When a vector multiplication is performed, the multiplier may be configured to generate and select partial products so as to effectively isolate the multiplication process for each pair of vector components. The multiplier may also be configured to sum the products of the vector components to form the vector dot product. The final product may be output in segments so as to require fewer bus lines. The segments may be rounded by adding a rounding constant. Rounding and normalization may be performed in two paths, one assuming an overflow will occur, the other assuming no overflow will occur. The multiplier may also be configured to perform iterative calculations to evaluate constant powers of an operand. Intermediate products that are formed may be rounded and normalized in two paths and then compressed and stored for use in the next iteration. An adjustment constant may also be added to increase the frequency of exactly rounded results.
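
    Example (illustrative sketch): radix-4 (modified) Booth recoding, the partial-product selection scheme the abstract refers to, modeled in software for the scalar signed case only. The 16-bit operand width is an arbitrary choice; the patented multiplier generates and selects these partial products in hardware and also handles the unsigned and packed-vector forms.

    #include <cstdint>
    #include <cstdio>

    // Multiply two signed 16-bit values by scanning overlapping 3-bit groups of
    // the multiplier; each group selects a partial product from {0, +/-M, +/-2M},
    // and the selected partial products are summed with the proper weights.
    static std::int32_t booth_radix4_mul(std::int16_t multiplicand,
                                         std::int16_t multiplier)
    {
        std::int32_t m = multiplicand;
        // Scale by two so each group (bits 2i+2..2i of y) covers multiplier bits
        // (2i+1, 2i, 2i-1), with the implied bit below bit 0 equal to 0.
        std::int32_t y = (std::int32_t)multiplier * 2;
        std::int32_t product = 0;

        for (int i = 0; i < 8; ++i) {                 // 8 groups cover 16 bits
            int group = (y >> (2 * i)) & 0x7;
            std::int32_t pp;
            switch (group) {
                case 0: case 7: pp = 0;      break;   // 000, 111 ->  0
                case 1: case 2: pp = m;      break;   // 001, 010 -> +M
                case 3:         pp = 2 * m;  break;   // 011      -> +2M
                case 4:         pp = -2 * m; break;   // 100      -> -2M
                default:        pp = -m;     break;   // 101, 110 -> -M
            }
            product += pp * (1 << (2 * i));           // weight 4^i
        }
        return product;
    }

    int main()
    {
        printf("%d (expect %d)\n", booth_radix4_mul(-1234, 567), -1234 * 567);
        return 0;
    }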
