Global random early detection packet dropping based on available memory
    201.
    Invention Grant (In Force)

    Publication Number: US09590926B2

    Publication Date: 2017-03-07

    Application Number: US14507621

    Filing Date: 2014-10-06

    CPC classification number: H04L49/9084

    Abstract: An apparatus and method for receiving a packet descriptor and a queue number that indicates a queue stored within a memory unit, determining a first amount of free memory in a group of packet descriptor queues, determining if the first amount of free memory is within a first range, applying a first drop probability to determine if the packet associated with the packet descriptor should be dropped when the first amount of free memory is within the first range, and applying a second drop probability to determine if the packet should be dropped when the first amount of free memory is within a second range. When it is determined that the packet is to be dropped, the packet descriptor is not stored in the queue. When it is determined that the packet is not to be dropped, the packet descriptor is stored in the queue.
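    The drop decision described above can be summarized in a few lines of C. The sketch below is illustrative only: the thresholds, probabilities, and function names are assumptions, not values taken from the patent.

        /* Sketch of a two-range random early drop decision keyed to the amount of
         * free descriptor-queue memory. Thresholds and probabilities are
         * illustrative assumptions, not values from the patent. */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdlib.h>

        #define RANGE1_MIN_FREE  4096   /* first range: moderately low free memory  */
        #define RANGE2_MIN_FREE  1024   /* second range: critically low free memory */

        static const double drop_prob_range1 = 0.10;  /* first drop probability  */
        static const double drop_prob_range2 = 0.50;  /* second drop probability */

        /* Return true if the packet behind this descriptor should be dropped. */
        static bool should_drop(uint32_t free_bytes)
        {
            double p = 0.0;
            if (free_bytes <= RANGE2_MIN_FREE)
                p = drop_prob_range2;        /* second range: drop more aggressively */
            else if (free_bytes <= RANGE1_MIN_FREE)
                p = drop_prob_range1;        /* first range: drop occasionally */
            return ((double)rand() / RAND_MAX) < p;
        }

        /* Store the packet descriptor in the queue only if the drop test passes. */
        static bool enqueue_descriptor(uint32_t free_bytes)
        {
            if (should_drop(free_bytes))
                return false;                /* descriptor is not stored in the queue */
            /* ... store the packet descriptor in the indicated queue ... */
            return true;
        }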


    Unique packet multicast packet ready command
    202.
    Invention Grant (In Force)

    Publication Number: US09588928B1

    Publication Date: 2017-03-07

    Application Number: US14530759

    Filing Date: 2014-11-02

    CPC classification number: G06F13/4027 G06F3/0613 G06F3/0647 G06F3/0683

    Abstract: A method of performing a unique packet multicast packet ready command (unique packet multicast mode operation) is described herein. A packet ready command is received onto a network interface circuit from a memory system via a bus. The packet ready command includes a multicast value. A communication mode is determined as a function of the multicast value. The multicast value indicates a plurality of packets are to be communicated to a plurality of destinations by the network interface circuit, and each of the plurality of packets is unique. A free packet command is output from the network interface circuit onto the bus. The free packet command includes a Free On Last Transfer (FOLT) value that indicates that the packets are to be freed from the memory system by the network interface circuit after the packets are communicated to the network interface circuit.
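    The sketch below illustrates, in C, the kind of command fields and mode selection the abstract describes. The struct layout, field names, and mode encoding are assumptions for illustration, not the actual bus encoding.

        /* Illustrative field layout for the packet ready command and the free
         * packet command; names, widths, and the mode encoding are assumptions. */
        #include <stdbool.h>
        #include <stdint.h>

        struct packet_ready_cmd {
            uint32_t packet_addr;   /* where the packet(s) sit in the memory system    */
            uint16_t length;        /* packet length in bytes                          */
            uint8_t  multicast;     /* multicast value: selects the communication mode */
        };

        struct free_packet_cmd {
            uint32_t packet_addr;   /* packet to release in the memory system          */
            bool     folt;          /* Free On Last Transfer: free the packet after it
                                       has been communicated to the interface circuit  */
        };

        enum comm_mode {
            MODE_UNICAST,
            MODE_MULTICAST_SAME_PACKET,     /* one packet replicated to many destinations */
            MODE_MULTICAST_UNIQUE_PACKETS   /* many destinations, each packet is unique   */
        };

        /* The communication mode is determined as a function of the multicast value. */
        static enum comm_mode mode_from_multicast(uint8_t multicast)
        {
            switch (multicast) {
            case 0:  return MODE_UNICAST;
            case 1:  return MODE_MULTICAST_SAME_PACKET;
            default: return MODE_MULTICAST_UNIQUE_PACKETS;
            }
        }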


    Generating a hash using S-box nonlinearizing of a remainder input
    203.
    Invention Grant (In Force)

    Publication Number: US09577832B2

    Publication Date: 2017-02-21

    Application Number: US14448980

    Filing Date: 2014-07-31

    Inventor: Gavin J. Stark

    CPC classification number: H04L9/3239 G09C1/00 H04L9/0643 H04L2209/12

    Abstract: A processor includes a hash register and a hash generating circuit. The hash generating circuit includes a novel programmable nonlinearizing function circuit as well as a modulo-2 multiplier, a first modulo-2 summer, a modulo-2 divider, and a second modulo-2 summer. The nonlinearizing function circuit receives a hash value from the hash register and performs a programmable nonlinearizing function, thereby generating a modified version of the hash value. In one example, the nonlinearizing function circuit includes a plurality of separately enableable S-box circuits. The multiplier multiplies the input data by a programmable multiplier value, thereby generating a product value. The first summer sums a first portion of the product value with the modified hash value. The divider divides the resulting sum by a fixed divisor value, thereby generating a remainder value. The second summer sums the remainder value and the second portion of the input data, thereby generating a hash result.
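    A toy C model of the described datapath is sketched below using modulo-2 (carry-less) arithmetic on small words. The bit widths, the S-box table, the multiplier, the divisor polynomial, and the way the portions are split are all illustrative assumptions.

        /* Toy model of the described hash datapath on small words. */
        #include <stdint.h>

        static const uint8_t sbox[16] = {            /* a small 4-bit S-box (assumed) */
            0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
            0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB
        };

        /* Programmable nonlinearizing function: push each nibble of the hash
         * value through an S-box; each S-box is separately enableable. */
        static uint8_t nonlinearize(uint8_t hash, int enable_lo, int enable_hi)
        {
            uint8_t lo = hash & 0xF, hi = hash >> 4;
            if (enable_lo) lo = sbox[lo];
            if (enable_hi) hi = sbox[hi];
            return (uint8_t)((hi << 4) | lo);
        }

        /* Carry-less (modulo-2) multiply of two 8-bit values -> 16-bit product. */
        static uint16_t clmul8(uint8_t a, uint8_t b)
        {
            uint16_t p = 0;
            for (int i = 0; i < 8; i++)
                if (b & (1u << i))
                    p ^= (uint16_t)a << i;
            return p;
        }

        /* Modulo-2 division by a fixed degree-8 divisor; returns the remainder. */
        static uint8_t mod2_rem(uint16_t value, uint16_t divisor)
        {
            for (int i = 15; i >= 8; i--)
                if (value & (1u << i))
                    value ^= divisor << (i - 8);
            return (uint8_t)value;
        }

        /* One hash step: combine 16 bits of input data with the running hash.
         * The split of the product and input into "portions" is simplified. */
        static uint8_t hash_step(uint16_t input, uint8_t hash)
        {
            uint8_t  modified = nonlinearize(hash, 1, 1);
            uint16_t product  = clmul8((uint8_t)(input >> 8), 0x1D);   /* programmable multiplier */
            uint16_t sum      = ((uint16_t)((product >> 8) ^ modified) << 8) | (product & 0xFF);
            uint8_t  rem      = mod2_rem(sum, 0x11B);                  /* fixed divisor value     */
            return rem ^ (uint8_t)(input & 0xFF);                      /* second portion of input */
        }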


    Processor having a tripwire bus port and executing a tripwire instruction
    204.
    Invention Grant (In Force)

    Publication Number: US09489202B2

    Publication Date: 2016-11-08

    Application Number: US14311212

    Filing Date: 2014-06-20

    Inventor: Gavin J. Stark

    Abstract: A pipelined run-to-completion processor has a special tripwire bus port and executes a novel tripwire instruction. Execution of the tripwire instruction causes the processor to output a tripwire value onto the port during a clock cycle when the tripwire instruction is being executed. A first multi-bit value of the tripwire value is data output from registers, flags, pointers, and/or data values stored in the pipeline. A field of the tripwire instruction specifies which particular stored values will be output as the first multi-bit value. A second multi-bit value of the tripwire value is a number that identifies the particular processor that output the tripwire value. The processor has a TE enable/disable control bit. This bit is programmable by a special instruction to disable all tripwire instructions. If disabled, a tripwire instruction is fetched and decoded but does not cause the output of a tripwire value.
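    A minimal C sketch of how a tripwire value might be composed is shown below; the field positions, widths, and names are assumptions, not the actual encoding.

        /* Sketch of composing the value a tripwire instruction drives onto the
         * tripwire bus port. */
        #include <stdbool.h>
        #include <stdint.h>

        struct pe_state {
            bool     te_enable;      /* TE enable/disable control bit                */
            uint8_t  processor_id;   /* identifies the processor outputting a value  */
            uint16_t regs[8];        /* state visible to the tripwire instruction    */
            uint16_t flags;
            uint16_t pc;
        };

        /* Select which stored value the instruction's field asks for. */
        static uint16_t select_value(const struct pe_state *s, unsigned field)
        {
            switch (field) {
            case 0:  return s->flags;
            case 1:  return s->pc;
            default: return s->regs[field & 7];
            }
        }

        /* Returns true and fills *out if a tripwire value is actually driven;
         * if TE is disabled the instruction still executes but outputs nothing. */
        static bool exec_tripwire(const struct pe_state *s, unsigned field, uint32_t *out)
        {
            if (!s->te_enable)
                return false;
            uint32_t first  = select_value(s, field);        /* first multi-bit value  */
            uint32_t second = s->processor_id;               /* second multi-bit value */
            *out = (second << 16) | first;
            return true;
        }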


    Picoengine multi-processor with power control management
    205.
    Invention Grant (In Force)

    Publication Number: US09483439B2

    Publication Date: 2016-11-01

    Application Number: US14251599

    Filing Date: 2014-04-12

    Inventor: Gavin J. Stark

    Abstract: A general purpose PicoEngine Multi-Processor (PEMP) includes a hierarchically organized pool of small specialized picoengine processors and associated memories. A stream of data input values is received onto the PEMP. Each input data value is characterized, and from the characterization a task is determined. Picoengines are selected in a sequence. When the next picoengine in the sequence is available, it is then given the input data value along with an associated task assignment. The picoengine then performs the task. An output picoengine selector selects picoengines in the same sequence. If the next picoengine indicates that it has completed its assigned task, then the output value from the selected picoengine is output from the PEMP. By changing the sequence used, more or less of the processing power and memory resources of the pool is brought to bear on the incoming data stream. The PEMP automatically disables unused picoengines and memories.
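    The in-order dispatch and collection scheme can be modeled roughly as below. The pool size, task representation, and busy/done flags are assumptions made for illustration.

        /* Toy model of in-order dispatch to, and collection from, a pool of
         * picoengines selected in a fixed sequence. */
        #include <stdbool.h>
        #include <stddef.h>
        #include <stdint.h>

        #define POOL_SIZE 16

        struct picoengine {
            bool     busy;
            bool     done;
            uint32_t input;
            uint32_t output;
        };

        static struct picoengine pool[POOL_SIZE];
        static size_t in_sel, out_sel;      /* input and output selectors            */
        static size_t seq_len = POOL_SIZE;  /* a shorter sequence uses less of the pool */

        /* Hand the next input value and task to the next picoengine in the
         * sequence, if that picoengine is available. */
        static bool dispatch(uint32_t input_value, uint32_t task)
        {
            struct picoengine *pe = &pool[in_sel];
            if (pe->busy)
                return false;               /* wait until the selected engine frees up */
            pe->busy = true;
            pe->done = false;
            pe->input = input_value;
            (void)task;                     /* task assignment accompanies the data;
                                               the engine later sets done and output  */
            in_sel = (in_sel + 1) % seq_len;
            return true;
        }

        /* Collect results in the same sequence, preserving input order. */
        static bool collect(uint32_t *output_value)
        {
            struct picoengine *pe = &pool[out_sel];
            if (!pe->busy || !pe->done)
                return false;
            *output_value = pe->output;
            pe->busy = false;
            out_sel = (out_sel + 1) % seq_len;
            return true;
        }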


    Commonality of memory island interface and structure
    206.
    Invention Grant (In Force)

    Publication Number: US09405713B2

    Publication Date: 2016-08-02

    Application Number: US13399915

    Filing Date: 2012-02-17

    CPC classification number: G06F13/20 G06F13/385

    Abstract: The functional circuitry of a network flow processor is partitioned into a number of rectangular islands. The islands are disposed in rows. A configurable mesh data bus extends through the islands. A first island includes a first memory and a first data bus interface. A second island includes a processor, a second memory, and a second data bus interface. The processor can issue a command for a target memory to do an action. If a field in the command has a first value then the target memory is the first memory, whereas if the field has a second value then the target memory is the second memory. The command format is the same regardless of whether the target memory is local or remote. If the target memory is remote, then a data bus bridge adds destination information before putting the command onto the global configurable mesh data bus.
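    A rough C sketch of a command whose format stays the same for local and remote targets, with the bridge adding destination information only for remote targets, is given below; the field names and encodings are assumptions.

        /* Sketch of a memory command with a uniform format for local and remote
         * target memories. */
        #include <stdint.h>

        struct mem_command {
            uint8_t  target;      /* first value = first (local) memory, second value = second memory */
            uint8_t  action;      /* what the target memory should do              */
            uint32_t address;
            uint32_t data;
            uint8_t  dest_island; /* filled in by the data bus bridge if remote    */
        };

        enum { TARGET_LOCAL_MEM = 0, TARGET_REMOTE_MEM = 1 };

        /* The issuing processor builds the same command either way ... */
        static struct mem_command make_command(uint8_t target, uint8_t action,
                                               uint32_t address, uint32_t data)
        {
            struct mem_command c = { target, action, address, data, 0 };
            return c;
        }

        /* ... and the bridge adds destination information only when the command
         * must travel over the configurable mesh data bus to another island. */
        static void bridge_route(struct mem_command *c, uint8_t remote_island)
        {
            if (c->target == TARGET_REMOTE_MEM)
                c->dest_island = remote_island;
        }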


    Flow key lookup involving multiple simultaneous cam operations to identify hash values in a hash bucket
    207.
    Invention Grant (In Force)

    Publication Number: US09385957B1

    Publication Date: 2016-07-05

    Application Number: US14537514

    Filing Date: 2014-11-10

    Abstract: A flow key is determined from an incoming packet. Two hash values A and B are then generated from the flow key. Hash value A is an index into a hash table to identify a hash bucket. Multiple simultaneous CAM lookup operations are performed on fields of the bucket to determine which ones of the fields store hash value B. For each populated field there is a corresponding entry in a key table and in other tables. The key table entry corresponding to each field that stores hash value B is checked to determine if that key table entry stores the original flow key. When the key table entry that stores the original flow key is identified, then the corresponding entries in the other tables are determined to be a “lookup output information value”. This value indicates how the packet is to be handled/forwarded by the network appliance.
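    The lookup sequence can be sketched in C as below, with the parallel CAM compares modeled as a loop over the bucket's fields. Table sizes, key length, and layouts are assumptions.

        /* Sketch of the two-hash bucket lookup: hash A picks the bucket, the
         * bucket's fields are matched against hash B, and the key table entry
         * confirms the original flow key. */
        #include <stdbool.h>
        #include <stdint.h>
        #include <string.h>

        #define BUCKETS            1024
        #define FIELDS_PER_BUCKET  8
        #define FLOW_KEY_LEN       40    /* bytes in the flow key */

        struct bucket { uint32_t field[FIELDS_PER_BUCKET]; };

        static struct bucket hash_table[BUCKETS];
        static uint8_t  key_table[BUCKETS][FIELDS_PER_BUCKET][FLOW_KEY_LEN];
        static uint32_t result_table[BUCKETS][FIELDS_PER_BUCKET];

        /* Returns true and sets *result to the lookup output information value
         * when the original flow key is found. */
        static bool flow_lookup(const uint8_t key[FLOW_KEY_LEN],
                                uint32_t hash_a, uint32_t hash_b, uint32_t *result)
        {
            uint32_t idx = hash_a % BUCKETS;         /* hash A indexes the bucket   */
            const struct bucket *b = &hash_table[idx];
            for (int i = 0; i < FIELDS_PER_BUCKET; i++) {   /* CAM: all compares at once */
                if (b->field[i] != hash_b)
                    continue;
                if (memcmp(key_table[idx][i], key, FLOW_KEY_LEN) == 0) {
                    *result = result_table[idx][i];
                    return true;
                }
            }
            return false;
        }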


Staggered island structure in an island-based network flow processor
    208.
    Invention Grant

    Publication Number: US09330041B1

    Publication Date: 2016-05-03

    Application Number: US14556147

    Filing Date: 2014-11-29

    Inventor: Gavin J. Stark

    Abstract: An island-based network flow processor (IB-NFP) integrated circuit includes rectangular islands disposed in rows. In one example, the configurable mesh data bus is configurable to form a command/push/pull data bus over which multiple transactions can occur simultaneously on different parts of the integrated circuit. The rectangular islands of one row are oriented in staggered relation with respect to the rectangular islands of the next row. The left and right edges of islands in a row align with the left and right edges of islands two rows down in the row structure. The data bus involves multiple meshes. In each mesh, each island has a centrally located crossbar switch and six radiating half links, as well as half links extending down to the functional circuitry of the island. The staggered orientation of the islands, and the structure of the half links, allows half links of adjacent islands to align with one another.

    Transactional memory that performs a PMM 32-bit lookup operation
    209.
    Invention Grant (In Force)

    Publication Number: US09311004B1

    Publication Date: 2016-04-12

    Application Number: US14588342

    Filing Date: 2014-12-31

    Inventor: Gavin J. Stark

    Abstract: A transactional memory (TM) receives a lookup command across a bus from a processor. The command includes a memory address. In response to the command, the TM pulls an input value (IV). The memory address is used to read a word containing multiple result values (RVs), multiple reference values, and multiple prefix values from memory. A selecting circuit within the TM uses a starting bit position and a mask size to select a portion of the IV. The portion of the IV is a lookup key value (LKV). Mask values are generated based on the prefix values. The LKV is masked by each mask value thereby generating multiple masked values that are compared to the reference values. Based on the comparison a lookup table generates a selector value that is used to select a result value. The selected result value is then communicated to the processor via the bus.
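    The prefix-mask matching step can be sketched as follows; the word layout, entry count, and field names are assumptions, not the actual memory format.

        /* Sketch of the 32-bit prefix-match lookup: the lookup key is cut out of
         * the input value, masked per entry, and compared against reference values. */
        #include <stdbool.h>
        #include <stdint.h>

        #define ENTRIES 4

        struct lookup_word {
            uint32_t result[ENTRIES];     /* result values (RVs)        */
            uint32_t reference[ENTRIES];  /* reference values           */
            uint8_t  prefix[ENTRIES];     /* prefix lengths, 0..32 bits */
        };

        /* Select a portion of the input value as the lookup key value (LKV). */
        static uint32_t extract_lkv(uint64_t iv, unsigned start_bit, unsigned mask_size)
        {
            uint64_t mask = (mask_size >= 64) ? ~0ull : ((1ull << mask_size) - 1);
            return (uint32_t)((iv >> start_bit) & mask);
        }

        static bool pmm_lookup(const struct lookup_word *w, uint64_t iv,
                               unsigned start_bit, unsigned mask_size, uint32_t *result)
        {
            uint32_t lkv = extract_lkv(iv, start_bit, mask_size);
            for (int i = 0; i < ENTRIES; i++) {
                uint32_t mask = (w->prefix[i] == 0) ? 0
                              : 0xFFFFFFFFu << (32 - w->prefix[i]);   /* mask from prefix */
                if ((lkv & mask) == (w->reference[i] & mask)) {
                    *result = w->result[i];     /* selector picks this result value */
                    return true;
                }
            }
            return false;
        }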


    Dedicated egress fast path for non-matching packets in an OpenFlow switch
    210.
    Invention Grant (In Force)

    Publication Number: US09299434B2

    Publication Date: 2016-03-29

    Application Number: US14151730

    Filing Date: 2014-01-09

    Abstract: A first packet of a flow received onto an OpenFlow switch causes a flow entry to be added to a flow table, but the associated action is to perform a TCAM lookup. A request is sent to an OpenFlow controller. A response OpenFlow message indicates an action. The response passes through a special dedicated egress fast-path such that the action is applied and the first packet is injected into the main data output path of the switch. A TCAM entry is also added that indicates the action. A second packet of the flow is then received and a flow table lookup causes a TCAM lookup, which indicates the action. The action is applied to the second packet, the packet is output from the switch, and the lookup table is updated so the flow entry will thereafter directly indicate the action. Subsequent packets of the flow do not involve TCAM lookups.
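    The escalation from flow-table miss, to TCAM lookup, to a direct flow-table action can be sketched as a small state update per flow entry, as below; the names and table representations are assumptions.

        /* Sketch of the per-flow action escalation: a flow entry either carries
         * its action directly, defers to a TCAM lookup, or is still pending at
         * the controller. */
        #include <stdbool.h>
        #include <stdint.h>

        enum flow_action_kind { ACT_NONE, ACT_TCAM_LOOKUP, ACT_DIRECT };

        struct flow_entry {
            enum flow_action_kind kind;
            uint32_t action;                 /* valid when kind == ACT_DIRECT */
        };

        /* Returns true when the switch can apply an action locally; otherwise the
         * packet is sent to the OpenFlow controller, whose reply arrives over the
         * dedicated egress fast path and installs the TCAM entry. */
        static bool lookup_action(struct flow_entry *fe,
                                  uint32_t (*tcam_lookup)(void),
                                  uint32_t *action)
        {
            switch (fe->kind) {
            case ACT_DIRECT:                 /* later packets: flow table indicates action */
                *action = fe->action;
                return true;
            case ACT_TCAM_LOOKUP:            /* second packet: TCAM indicates the action   */
                *action = tcam_lookup();
                fe->kind = ACT_DIRECT;       /* update so the entry indicates it directly  */
                fe->action = *action;
                return true;
            default:                         /* first packet: add entry pointing at the    */
                fe->kind = ACT_TCAM_LOOKUP;  /* TCAM and request the action from the controller */
                return false;
            }
        }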

