    121. Processor having a tripwire bus port and executing a tripwire instruction
    Invention grant (in force)

    Publication number: US09489202B2

    Publication date: 2016-11-08

    Application number: US14311212

    Application date: 2014-06-20

    Inventor: Gavin J. Stark

    Abstract: A pipelined run-to-completion processor has a special tripwire bus port and executes a novel tripwire instruction. Execution of the tripwire instruction causes the processor to output a tripwire value onto the port during a clock cycle when the tripwire instruction is being executed. A first multi-bit value of the tripwire value is data that is output from registers, and/or flags, and/or pointers, and/or data values stored in the pipeline. A field of the tripwire instruction specifies what particular stored values will be output as the first multi-bit value. A second multi-bit value of the tripwire value is a number that identifies the particular processor that output the tripwire value. The processor has a TE enable/disable control bit. This bit is programmable by a special instruction to disable all tripwire instructions. If disabled, a tripwire instruction is fetched and decoded but does not cause the output of a tripwire value.
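
    A minimal behavioral sketch of the mechanism the abstract describes; the class, field names, and bit widths below are illustrative assumptions, not taken from the patent:

```python
# Behavioral sketch of a tripwire instruction (names and widths are hypothetical).

class TripwireProcessor:
    def __init__(self, processor_id, te_enable=True):
        self.processor_id = processor_id      # number identifying this processor
        self.te_enable = te_enable            # TE enable/disable control bit
        self.state = {                        # internal state selectable for output
            "regs": 0x1234,
            "flags": 0b01,
            "pointers": 0x0040,
        }

    def execute_tripwire(self, select_field):
        """Execute a tripwire instruction; return the tripwire bus value or None."""
        if not self.te_enable:
            return None                       # fetched and decoded, but no output
        first = self.state[select_field]      # first multi-bit value: selected stored values
        second = self.processor_id            # second multi-bit value: processor number
        return (second << 16) | (first & 0xFFFF)

cpu = TripwireProcessor(processor_id=7)
print(hex(cpu.execute_tripwire("regs")))      # e.g. 0x71234
```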

    122. Picoengine multi-processor with power control management
    Invention grant (in force)

    Publication number: US09483439B2

    Publication date: 2016-11-01

    Application number: US14251599

    Application date: 2014-04-12

    Inventor: Gavin J. Stark

    Abstract: A general purpose PicoEngine Multi-Processor (PEMP) includes a hierarchically organized pool of small specialized picoengine processors and associated memories. A stream of data input values is received onto the PEMP. Each input data value is characterized, and from the characterization a task is determined. Picoengines are selected in a sequence. When the next picoengine in the sequence is available, it is then given the input data value along with an associated task assignment. The picoengine then performs the task. An output picoengine selector selects picoengines in the same sequence. If the next picoengine indicates that it has completed its assigned task, then the output value from the selected picoengine is output from the PEMP. By changing the sequence used, more or less of the processing power and memory resources of the pool is brought to bear on the incoming data stream. The PEMP automatically disables unused picoengines and memories.
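
    A rough sketch of the in-sequence picoengine selection described above; the Picoengine class, the pool size, and the task-characterization rule are assumptions for illustration:

```python
# Toy model: input values are handed to picoengines in a fixed sequence, and
# results are drained by an output selector using the same sequence.
from collections import deque

class Picoengine:
    def __init__(self):
        self.busy = False
        self.result = None

    def assign(self, data, task):
        self.busy = True
        self.result = (task, data)            # pretend the task completes immediately

    def done(self):
        return self.result is not None

pool = [Picoengine() for _ in range(4)]
inputs = deque([0x11, 0x22, 0x33, 0x44, 0x55])
in_seq = out_seq = 0
outputs = []

while inputs or any(pe.done() for pe in pool):
    # input selector: give the next data value to the next picoengine in the sequence
    if inputs and not pool[in_seq % len(pool)].busy:
        data = inputs.popleft()
        task = "parse" if data & 1 else "classify"   # characterize the input value
        pool[in_seq % len(pool)].assign(data, task)
        in_seq += 1
    # output selector: drain picoengines in the same sequence they were assigned
    pe = pool[out_seq % len(pool)]
    if pe.done():
        outputs.append(pe.result)
        pe.busy, pe.result = False, None
        out_seq += 1

print(outputs)   # results leave the pool in the same order the inputs arrived
```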

    123. Staggered island structure in an island-based network flow processor

    Publication number: US09330041B1

    Publication date: 2016-05-03

    Application number: US14556147

    Application date: 2014-11-29

    Inventor: Gavin J. Stark

    Abstract: An island-based network flow processor (IB-NFP) integrated circuit includes rectangular islands disposed in rows. In one example, the configurable mesh data bus is configurable to form a command/push/pull data bus over which multiple transactions can occur simultaneously on different parts of the integrated circuit. The rectangular islands of one row are oriented in staggered relation with respect to the rectangular islands of the next row. The left and right edges of islands in a row align with left and right edges of islands two rows down in the row structure. The data bus involves multiple meshes. In each mesh, the island has a centrally located crossbar switch and six radiating half links, and half links down to functional circuitry of the island. The staggered orientation of the islands, and the structure of the half links, allows half links of adjacent islands to align with one another.
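
    A toy model of the staggered placement described above, assuming an arbitrary island width, just to illustrate the two-rows-down edge alignment:

```python
# Staggered-row placement check (island width is an arbitrary assumption).
ISLAND_W = 8

def left_edge(row, col):
    # odd rows are offset by half an island width relative to even rows
    return col * ISLAND_W + (ISLAND_W // 2 if row % 2 else 0)

# left edges of islands in a row align with islands two rows down
assert left_edge(0, 3) == left_edge(2, 3)
assert left_edge(1, 3) == left_edge(3, 3)
# adjacent rows are staggered by half an island width
assert left_edge(1, 3) - left_edge(0, 3) == ISLAND_W // 2
print("staggered layout checks pass")
```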

    124. Transactional memory that performs a PMM 32-bit lookup operation
    Invention grant (in force)

    Publication number: US09311004B1

    Publication date: 2016-04-12

    Application number: US14588342

    Application date: 2014-12-31

    Inventor: Gavin J. Stark

    Abstract: A transactional memory (TM) receives a lookup command across a bus from a processor. The command includes a memory address. In response to the command, the TM pulls an input value (IV). The memory address is used to read a word containing multiple result values (RVs), multiple reference values, and multiple prefix values from memory. A selecting circuit within the TM uses a starting bit position and a mask size to select a portion of the IV. The portion of the IV is a lookup key value (LKV). Mask values are generated based on the prefix values. The LKV is masked by each mask value thereby generating multiple masked values that are compared to the reference values. Based on the comparison a lookup table generates a selector value that is used to select a result value. The selected result value is then communicated to the processor via the bus.
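
    A rough sketch of the masked-compare lookup described above; the word layout, the 32-bit key, and the first-match selection are simplifying assumptions rather than the patent's exact format:

```python
def pmm_lookup(iv, start_bit, mask_size, word):
    """Select a lookup key value (LKV) from the input value and match it
    against the (prefix, reference, result) entries read from one memory word."""
    lkv = (iv >> start_bit) & ((1 << mask_size) - 1)
    for prefix_len, reference, result_value in word:
        mask = ((1 << prefix_len) - 1) << (mask_size - prefix_len)
        if (lkv & mask) == (reference & mask):       # masked compare selects the entry
            return result_value
    return None                                      # no reference value matched

# one word read at the command's memory address: prefixes, references, result values
word = [
    (24, 0xC0A80100, "RV_subnet"),    # matches 192.168.1.0/24
    (16, 0xC0A80000, "RV_site"),      # matches 192.168.0.0/16
    (0,  0x00000000, "RV_default"),
]
print(pmm_lookup(0xC0A80142, start_bit=0, mask_size=32, word=word))   # RV_subnet
```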

    125. CHAINED CPP COMMAND
    Invention application (in force)

    Publication number: US20160085701A1

    Publication date: 2016-03-24

    Application number: US14492015

    Application date: 2014-09-20

    CPC classification number: G06F13/28 G06F12/1081 G06F13/1642 G06F13/4027

    Abstract: A chained Command/Push/Pull (CPP) bus command is output by a first device and is sent from a CPP bus master interface across a set of command conductors of a CPP bus to a second device. The chained CPP command includes a reference value. The second device decodes the command, in response determines a plurality of CPP commands, and outputs the plurality of CPP commands onto the CPP bus. The second device detects when the plurality of CPP commands have been completed, and in response returns the reference value back to the CPP bus master interface of the first device via a set of data conductors of the CPP bus. The reference value indicates to the first device that an overall operation of the chained CPP command has been completed.
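
    A simplified model of the chained-command flow described above; the command encoding and the expansion into sub-commands are invented for illustration:

```python
class SecondDevice:
    def decode_and_expand(self, chained_cmd):
        # decode the single chained command into a plurality of CPP commands
        base, count = chained_cmd["addr"], chained_cmd["count"]
        return [{"op": "write", "addr": base + i * 8} for i in range(count)]

def issue_chained(second_device, chained_cmd):
    """First device sends one chained command; the second device expands it,
    completes the sub-commands, and returns the reference value when done."""
    sub_cmds = second_device.decode_and_expand(chained_cmd)
    completed = [cmd["addr"] for cmd in sub_cmds]   # pretend each sub-command completes
    assert len(completed) == chained_cmd["count"]   # second device detects completion
    return chained_cmd["reference"]                 # returned over the data conductors

ref = issue_chained(SecondDevice(), {"addr": 0x1000, "count": 4, "reference": 0xAB})
print(hex(ref))   # 0xab: the first device knows the overall operation completed
```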

    126. PPI ALLOCATION REQUEST AND RESPONSE FOR ACCESSING A MEMORY SYSTEM
    Invention application (in force)

    Publication number: US20160057079A1

    Publication date: 2016-02-25

    Application number: US14464692

    Application date: 2014-08-20

    CPC classification number: H04L49/3072 H04L45/742 H04L49/9042

    Abstract: Within a networking device, packet portions from multiple PDRSDs (Packet Data Receiving and Splitting Devices) are loaded into a single memory, so that the packet portions can later be processed by a processing device. Rather than the PDRSDs managing and handling the storing of packet portions into the memory, a packet engine is provided. The PDRSDs use a PPI (Packet Portion Identifier) Addressing Mode (PAM) in communicating with the packet engine and in instructing the packet engine to store packet portions. A PDRSD requests a PPI from the packet engine in a PPI allocation request, and is allocated a PPI by the packet engine in a PPI allocation response, and then tags the packet portion to be written with the PPI and sends the packet portion and the PPI to the packet engine.
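
    A sketch of the PPI allocation handshake described above; the class names and the PPI numbering are assumptions made for this example:

```python
class PacketEngine:
    def __init__(self):
        self.free_ppis = list(range(16))
        self.memory = {}                     # PPI -> stored packet portion

    def allocate(self):
        # PPI allocation response: hand out the next free PPI, if any
        return self.free_ppis.pop(0) if self.free_ppis else None

    def write(self, ppi, packet_portion):
        self.memory[ppi] = packet_portion    # the engine handles the actual store

class PDRSD:
    def __init__(self, engine):
        self.engine = engine

    def receive(self, packet_portion):
        ppi = self.engine.allocate()         # PPI allocation request
        if ppi is not None:
            # tag the portion with the PPI and send both to the packet engine
            self.engine.write(ppi, packet_portion)
        return ppi

engine = PacketEngine()
print(PDRSD(engine).receive(b"\x45\x00"))    # -> 0, the allocated PPI
```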

    127. USING A CREDITS AVAILABLE VALUE IN DETERMINING WHETHER TO ISSUE A PPI ALLOCATION REQUEST TO A PACKET ENGINE
    Invention application (in force)

    Publication number: US20160055111A1

    Publication date: 2016-02-25

    Application number: US14591003

    Application date: 2015-01-07

    CPC classification number: G06F13/4022 G06F13/4027 G06F13/4221

    Abstract: In response to receiving a novel “Return Available PPI Credits” command from a credit-aware device, a packet engine sends a “Credit To Be Returned” (CTBR) value it maintains for that device back to the credit-aware device, and zeroes out its stored CTBR value. The credit-aware device adds the credits returned to a “Credits Available” value it maintains. The credit-aware device uses the “Credits Available” value to determine whether it can issue a PPI allocation request. The “Return Available PPI Credits” command does not result in any PPI allocation or de-allocation. In another novel aspect, the credit-aware device is permitted to issue one PPI allocation request to the packet engine when its recorded “Credits Available” value is zero or negative. If the PPI allocation request cannot be granted, then it is buffered in the packet engine, and is resubmitted within the packet engine, until the packet engine makes the PPI allocation.
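
    A sketch of the credit accounting described above; the data structures and the handling of the single request allowed at zero or negative credits are simplified assumptions:

```python
class PacketEngine:
    def __init__(self):
        self.ctbr = 0               # "Credit To Be Returned" kept for the device

    def on_ppi_freed(self):
        self.ctbr += 1              # a freed PPI becomes a credit owed back

    def return_available_credits(self):
        # "Return Available PPI Credits": send CTBR back and zero the stored value;
        # this command allocates and de-allocates nothing
        credits, self.ctbr = self.ctbr, 0
        return credits

class CreditAwareDevice:
    def __init__(self, engine):
        self.engine = engine
        self.credits_available = 0
        self.negative_request_used = False   # one request allowed at zero/negative credits

    def try_allocation_request(self):
        if self.credits_available <= 0 and self.negative_request_used:
            return False                     # must wait for credits to come back
        if self.credits_available <= 0:
            self.negative_request_used = True
        self.credits_available -= 1
        return True                          # PPI allocation request issued

    def poll_credits(self):
        self.credits_available += self.engine.return_available_credits()
        if self.credits_available > 0:
            self.negative_request_used = False

engine = PacketEngine()
dev = CreditAwareDevice(engine)
print(dev.try_allocation_request())   # True: the single request permitted at zero credits
print(dev.try_allocation_request())   # False: no credits available
engine.on_ppi_freed()                 # the engine later frees PPIs for this device
engine.on_ppi_freed()
dev.poll_credits()
print(dev.credits_available)          # 1: returned credits minus the request already made
```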

    128. PICOENGINE HAVING A HASH GENERATOR WITH REMAINDER INPUT S-BOX NONLINEARIZING
    Invention application (in force)

    Publication number: US20160034278A1

    Publication date: 2016-02-04

    Application number: US14448906

    Application date: 2014-07-31

    Inventor: Gavin J. Stark

    Abstract: A processor includes a hash register and a hash generating circuit. The hash generating circuit includes a novel programmable nonlinearizing function circuit as well as a modulo-2 multiplier, a first modulo-2 summer, a modulo-2 divider, and a second modulo-2 summer. The nonlinearizing function circuit receives a hash value from the hash register and performs a programmable nonlinearizing function, thereby generating a modified version of the hash value. In one example, the nonlinearizing function circuit includes a plurality of separately enableable S-box circuits. The multiplier multiplies the input data by a programmable multiplier value, thereby generating a product value. The first summer sums a first portion of the product value with the modified hash value. The divider divides the resulting sum by a fixed divisor value, thereby generating a remainder value. The second summer sums the remainder value and the second portion of the input data, thereby generating a hash result.
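
    A toy model of the hash datapath described above: an S-box nonlinearizing step, a carry-less (modulo-2) multiply, modulo-2 (XOR) sums, and a modulo-2 polynomial division. The widths, S-box table, multiplier, and divisor are all assumptions:

```python
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]    # an arbitrary 4-bit S-box

def sbox_nonlinearize(h, width=16):
    """Run each nibble of the hash value through the S-box (the programmable
    nonlinearizing step; the abstract's S-boxes are separately enableable)."""
    out = 0
    for i in range(0, width, 4):
        out |= SBOX[(h >> i) & 0xF] << i
    return out

def clmul(a, b):
    """Modulo-2 (carry-less) multiplication of two bit patterns."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    return p

def mod2_remainder(value, divisor):
    """Remainder of modulo-2 (GF(2) polynomial) division by a fixed divisor."""
    dlen = divisor.bit_length()
    while value.bit_length() >= dlen:
        value ^= divisor << (value.bit_length() - dlen)
    return value

def hash_step(hash_reg, data, multiplier=0x1021, divisor=0x11021):
    modified = sbox_nonlinearize(hash_reg)       # modified version of the hash value
    product = clmul(data, multiplier)            # multiply input data by the multiplier value
    summed = (product >> 16) ^ modified          # first sum: one portion of the product
    remainder = mod2_remainder(summed, divisor)  # divide by the fixed divisor value
    return remainder ^ (data & 0xFFFF)           # second sum: other portion of the input data

print(hex(hash_step(0xBEEF, 0x12345678)))
```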

    129. Hierarchical resource pools in a linker
    Invention grant (in force)

    Publication number: US09223555B2

    Publication date: 2015-12-29

    Application number: US14074623

    Application date: 2013-11-07

    CPC classification number: G06F8/54 G06F8/45 G06F8/453

    Abstract: A novel declare instruction can be used in source code to declare a sub-pool of resource instances to be taken from the resource instances of a larger declared pool. Using such declare instructions, a hierarchy of pools and sub-pools can be declared. A novel allocate instruction can then be used in the source code to instruct a novel linker to make resource instance allocations from a desired pool or a desired sub-pool of the hierarchy. After compilation, the declare and allocate instructions appear in the object code. The linker uses the declare and allocate instructions in the object code to set up the hierarchy of pools and to make the indicated allocations of resource instances to symbols. After resource allocation, the linker replaces instances of a symbol in the object code with the address of the allocated resource instance, thereby generating executable code.
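
    An illustrative sketch of hierarchical pools and allocation from them; the declare/allocate API below is invented to mirror the abstract, not the patent's actual object-code directives:

```python
class Pool:
    def __init__(self, name, instances, parent=None):
        self.name = name
        if parent is not None:
            # a sub-pool takes its instances out of the larger declared pool
            instances = [parent.free.pop(0) for _ in range(instances)]
        self.free = list(instances)

    def declare_subpool(self, name, count):
        return Pool(name, count, parent=self)

    def allocate(self, symbol):
        # bind the symbol to one resource instance taken from this pool
        return symbol, self.free.pop(0)

top = Pool("cls_memory", range(0x00, 0x40, 0x8))    # a declared pool of 8 instances
fast = top.declare_subpool("fast", 2)               # sub-pool taken from the larger pool

symbol_table = dict([fast.allocate("pkt_counter"), top.allocate("scratch_buf")])
print(symbol_table)   # the linker would patch these instances in place of the symbols
```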

    130. Recursive use of multiple hardware lookup structures in a transactional memory

    Publication number: US09176905B1

    Publication date: 2015-11-03

    Application number: US14724824

    Application date: 2015-05-29

    Inventor: Gavin J. Stark

    Abstract: A lookup engine of a transactional memory (TM) has multiple hardware lookup structures, each usable to perform a different type of lookup. In response to a lookup command, the lookup engine reads a first block of first information from a memory unit. The first information configures the lookup engine to perform a first type of lookup, thereby identifying a first result value. If the first result value is not a final result value, then the lookup engine uses address information in the first result value to read a second block of second information. The second information configures the lookup engine to perform a second type of lookup, thereby identifying a second result value. This process repeats until a final result value is obtained. The type of lookup performed is determined by the result value of the preceding lookup and/or type information of the block of information for the next lookup.
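
    A sketch of the chained-lookup loop described above; the block layout and the two lookup types shown are placeholders for the hardware lookup structures:

```python
MEMORY = {
    # address: (lookup type, table content) -- the "block of information"
    0x00: ("direct", {0x5: (False, 0x10), 0x6: (True, "RV_six")}),
    0x10: ("range",  [((0, 99), (True, "RV_low")), ((100, 199), (True, "RV_high"))]),
}

def lookup(key, addr=0x00):
    while True:
        lookup_type, table = MEMORY[addr]            # read the next block from memory
        if lookup_type == "direct":                  # first type of hardware lookup
            final, value = table[key & 0xF]
        else:                                        # "range": second type of lookup
            final, value = next(v for (lo, hi), v in table if lo <= key <= hi)
        if final:
            return value                             # final result value reached
        addr = value                                 # otherwise: address of the next block

print(lookup(0x75))   # the direct lookup chains into a range lookup -> RV_high
```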
