    11.
    Invention Application
    DIRECT COMMUNICATION BETWEEN GPU AND FPGA COMPONENTS (In Force)

    Publication Number: US20140055467A1

    Publication Date: 2014-02-27

    Application Number: US13593129

    Filing Date: 2012-08-23

    Abstract: A system may include a Graphics Processing Unit (GPU) and a Field Programmable Gate Array (FPGA). The system may further include a bus interface that is external to the FPGA, and that is configured to transfer data directly between the GPU and the FPGA without storing the data in a memory of a central processing unit (CPU) as an intermediary operation.

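The direct-transfer arrangement described in the abstract can be sketched as a toy model in Python. The `Device` and `BusInterface` classes and their methods are illustrative names of our own, not the patented hardware or any real driver API; the point is only that data moves device-to-device without a staging copy in CPU memory.

```python
# Toy model of the abstract's idea: a bus interface external to the FPGA
# copies data between a GPU buffer and an FPGA buffer directly, never
# staging it in CPU memory. All names here are illustrative.

class Device:
    def __init__(self, name):
        self.name = name
        self.buffer = {}          # address -> value, device-local memory

class BusInterface:
    """External bus that moves data device-to-device."""
    def __init__(self):
        self.cpu_memory_writes = 0    # counts any staging through CPU memory

    def direct_transfer(self, src, src_addr, dst, dst_addr, length):
        # Copy straight from source device memory to destination device
        # memory; no intermediate copy into CPU memory is ever made.
        for i in range(length):
            dst.buffer[dst_addr + i] = src.buffer[src_addr + i]

gpu = Device("GPU")
fpga = Device("FPGA")
for i in range(4):
    gpu.buffer[i] = i * i         # GPU-resident data: 0, 1, 4, 9

bus = BusInterface()
bus.direct_transfer(gpu, 0, fpga, 0x100, 4)

print([fpga.buffer[0x100 + i] for i in range(4)])  # [0, 1, 4, 9]
print(bus.cpu_memory_writes)                        # 0
```

In a real system this role is played by a peer-to-peer capable interconnect such as PCIe; the counter simply makes the "no CPU-memory intermediary" claim observable in the sketch.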

    12.
    Invention Application
    Content addressable memory architecture (In Force)

    Publication Number: US20070011436A1

    Publication Date: 2007-01-11

    Application Number: US11143060

    Filing Date: 2005-06-01

    Applicant: Ray Bittner

    Inventor: Ray Bittner

    CPC classification number: G06F12/0895

    Abstract: A content addressable memory (CAM) architecture comprises two components: a small, fast on-chip cache memory that stores data likely to be needed in the immediate future, and an off-chip main memory in ordinary RAM. The CAM allows data to be stored with an associated tag of any size that identifies the data. Via tags, waves of data are launched into a machine's computational hardware and re-associated with their related tags upon return. Tags may be generated so that related data values occupy adjacent storage locations, facilitating fast retrieval. Typically, the CAM emits only complete operand sets. By using tags to identify unique operand sets, computations can proceed out of order and their results be recollected later for further processing. This allows greater computational speed via multiple parallel processing units that compute large collections of operand sets, or by opportunistically fetching and executing operand sets as they become available.

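The complete-operand-set behavior described in the abstract can be sketched in Python. `TaggedCAM`, its `store` method, and the slot scheme are illustrative names we chose, not anything specified by the patent; the sketch shows only the core rule that a tagged set is emitted once, and only once all of its operands have arrived.

```python
# Illustrative sketch of the tagged-CAM idea: values arrive out of order,
# each carrying a tag naming its operand set plus a slot index; the CAM
# emits a set only when every slot of that set has been filled.

class TaggedCAM:
    def __init__(self, slots_per_set):
        self.slots_per_set = slots_per_set
        self.pending = {}         # tag -> {slot: value}, incomplete sets

    def store(self, tag, slot, value):
        """Store one operand; return the full set if it is now complete."""
        entry = self.pending.setdefault(tag, {})
        entry[slot] = value
        if len(entry) == self.slots_per_set:
            del self.pending[tag]            # set is complete: emit it
            return [entry[s] for s in sorted(entry)]
        return None                          # incomplete: emit nothing

cam = TaggedCAM(slots_per_set=2)
emitted = []
# Operands arrive out of order, interleaved across tags 7 and 8.
for tag, slot, value in [(7, 0, 10), (8, 0, 3), (8, 1, 4), (7, 1, 20)]:
    result = cam.store(tag, slot, value)
    if result is not None:
        emitted.append((tag, result))

print(emitted)   # [(8, [3, 4]), (7, [10, 20])], complete sets only
```

Because downstream units see only complete sets, interleaved arrival order is harmless: tag 8 completes before tag 7 here, which is exactly the out-of-order execution the abstract describes.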

    13.
    Invention Application
    Conditional execution via content addressable memory and parallel computing execution model (In Force)

    Publication Number: US20060277392A1

    Publication Date: 2006-12-07

    Application Number: US11143308

    Filing Date: 2005-06-01

    Applicant: Ray Bittner

    Inventor: Ray Bittner

    CPC classification number: G06F9/3885 G06F9/30058 G06F9/325 G06F9/4494

    Abstract: The use of a configuration-based execution model in conjunction with a content addressable memory (CAM) architecture provides a mechanism that enables a number of computing concepts, including conditional execution (e.g., if-then statements and while loops), function calls, and recursion. If-then and while loops are implemented using a CAM feature that emits only complete operand sets from the CAM for processing; different seed operands are generated for different conditional evaluation results, and each seed operand is matched with computed data for an if-then branch or upon exiting a while loop. As a result, downstream operators retrieve only complete operand sets. Function calls and recursion are handled by passing a return tag as an operand, along with function parameter data, into the input tag space of a function. A recursive function is split into two halves: a pre-recursive half and a post-recursive half that executes after the pre-recursive calls.

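The seed-operand mechanism for if-then execution can be sketched in Python under the same complete-set rule as above. The function and tag names are our own illustrative choices, and the two-slot dictionary stands in for the CAM; the sketch shows only how seeding exactly one branch tag makes only that branch's operator ever receive a complete operand set.

```python
# Sketch of the abstract's if-then mechanism: the condition unit emits a
# "seed" operand tagged for exactly one branch, while the data value is
# offered to both branches. Because only complete (seed, data) pairs are
# released, only the taken branch's operator ever fires.

def conditional_execute(x):
    cam = {}                                  # tag -> collected operands

    def store(tag, slot, value):
        entry = cam.setdefault(tag, {})
        entry[slot] = value
        return entry if len(entry) == 2 else None

    # Evaluate the condition once; seed only the matching branch's tag.
    branch_tag = "then" if x > 0 else "else"
    store(branch_tag, "seed", True)

    # Offer the data value to each branch in turn; only the seeded
    # branch completes a (seed, data) operand set and fires.
    for tag, op in [("then", lambda v: v * 2), ("else", lambda v: -v)]:
        ready = store(tag, "data", x)
        if ready is not None:
            return op(ready["data"])

print(conditional_execute(5))    # then-branch fires: 10
print(conditional_execute(-3))   # else-branch fires: 3
```

The untaken branch's operator is never suppressed explicitly; it simply never sees a complete operand set, which is the essence of implementing control flow through the CAM's matching rule rather than through a program counter.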
