Island-based network flow processor with efficient search key processing
    141.
    Granted Patent
    Island-based network flow processor with efficient search key processing (In Force)

    Publication Number: US09594706B2

    Publication Date: 2017-03-14

    Application Number: US14326381

    Filing Date: 2014-07-08

    Inventor: Rick Bouley

    CPC classification number: G06F13/28 G06F12/1081 G06F13/1663 G06F13/4221

    Abstract: An Island-Based Network Flow Processor (IBNFP) includes a memory and a processor located on a first island, a Direct Memory Access (DMA) controller located on a second island, and an Interlaken Look-Aside (ILA) interface circuit and an interface circuit located on a third island. A search key data set including multiple search keys is stored in the memory. A descriptor is generated by the processor and is sent to the DMA controller, which generates a search key data request, receives the search key data set, and selects a single search key. The ILA interface circuit receives the search key and generates an ILA packet including the search key, which is sent to an external transactional memory device that generates a result data value. The DMA controller receives the result data value via the ILA interface circuit, writes the result data value to the memory, and sends a DMA completion notification.

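    The descriptor-driven search key selection described in the abstract can be sketched in software. This is an illustrative model only, not the patented hardware; the `Descriptor` fields, memory layout, and function names are all assumptions.

    ```python
    # Illustrative sketch: a DMA controller reads a search key data set
    # from memory and selects a single key using fields carried in the
    # processor's descriptor. All names and layouts are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Descriptor:
        base_addr: int   # where the search key data set starts in memory
        key_size: int    # bytes per search key
        key_index: int   # which key in the set to send over the ILA interface

    def select_search_key(memory: bytes, d: Descriptor) -> bytes:
        """Model the DMA step: fetch the key set, pick one search key."""
        start = d.base_addr + d.key_index * d.key_size
        return memory[start:start + d.key_size]

    # Example: memory holding four 4-byte search keys back to back.
    mem = bytes.fromhex("aaaaaaaabbbbbbbbccccccccdddddddd")
    key = select_search_key(mem, Descriptor(base_addr=0, key_size=4, key_index=2))
    assert key == bytes.fromhex("cccccccc")
    ```

    In the patent, the selected key would then be wrapped in an ILA packet and sent to the external transactional memory device.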

    Guaranteed in-order packet delivery
    142.
    Granted Patent
    Guaranteed in-order packet delivery (In Force)

    Publication Number: US09584637B2

    Publication Date: 2017-02-28

    Application Number: US14184455

    Filing Date: 2014-02-19

    Abstract: Circuitry to provide in-order packet delivery. A packet descriptor including a sequence number is received. It is determined in which of three ranges the sequence number resides. Depending, at least in part, on the range in which the sequence number resides, it is determined whether the packet descriptor is to be communicated to a scheduler, which causes an associated packet to be transmitted. If the sequence number resides in a first "flush" range, all associated packet descriptors are output. If the sequence number resides in a second "send" range, only the received packet descriptor is output. If the sequence number resides in a third "store and reorder" range and the sequence number is the next in-order sequence number, the packet descriptor is output; if the sequence number is not the next in-order sequence number, the packet descriptor is stored in a buffer and a corresponding valid bit is set.

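    The three-range decision above can be modeled compactly. The range names come from the abstract; the particular range layout relative to the next expected sequence number is an invented illustration, since the patent does not spell out the boundaries here.

    ```python
    # Toy model of the three-range classification from the abstract.
    # Range spans and their placement relative to the expected sequence
    # number are illustrative assumptions.
    def classify(seq: int, expected: int, store_span: int, send_span: int) -> str:
        """Return which action applies to a descriptor's sequence number.

        Relative to the next expected in-order number:
          - "store_and_reorder": [expected, expected + store_span)
          - "send":              [expected + store_span, expected + store_span + send_span)
          - "flush":             everything else
        """
        delta = seq - expected
        if 0 <= delta < store_span:
            return "store_and_reorder"
        if store_span <= delta < store_span + send_span:
            return "send"
        return "flush"

    assert classify(100, expected=100, store_span=16, send_span=32) == "store_and_reorder"
    assert classify(120, expected=100, store_span=16, send_span=32) == "send"
    assert classify(10,  expected=100, store_span=16, send_span=32) == "flush"
    ```

    Within the store-and-reorder case, a descriptor whose sequence number equals the expected one is output immediately; any other is buffered with its valid bit set, matching the abstract's last sentence.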

    Transactional memory that performs a programmable address translation if a DAT bit in a transactional memory write command is set
    143.
    Granted Patent
    Transactional memory that performs a programmable address translation if a DAT bit in a transactional memory write command is set (In Force)

    Publication Number: US09535851B2

    Publication Date: 2017-01-03

    Application Number: US14172856

    Filing Date: 2014-02-04

    Abstract: A transactional memory receives a command, where the command includes an address and a novel DAT (Do Address Translation) bit. If the DAT bit is set and if the transactional memory is enabled to do address translations and if the command is for an access (read or write) of a memory of the transactional memory, then the transactional memory performs an address translation operation on the address of the command. Parameters of the address translation are programmable and are set up before the command is received. In one configuration, certain bits of the incoming address are deleted, and other bits are shifted in bit position, and a base address is ORed in, and a padding bit is added, thereby generating the translated address. The resulting translated address is then used to access the memory of the transactional memory to carry out the command.

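    The abstract's example configuration (delete some bits, shift others, OR in a base address, append a padding bit) maps naturally to a few bit operations. The concrete bit positions and parameter names below are made up for illustration; the patent only fixes the general recipe.

    ```python
    # Sketch of the programmable address translation described in the
    # abstract. drop_mask, shift, base, and pad model the programmable
    # parameters set up before the command arrives; their values here
    # are illustrative only.
    def translate(addr: int, drop_mask: int, shift: int, base: int, pad: int) -> int:
        kept = addr & ~drop_mask           # delete selected incoming bits
        shifted = kept >> shift            # shift remaining bits in position
        ored = shifted | base              # OR in the programmed base address
        return (ored << 1) | (pad & 1)     # append one padding bit (LSB)

    # Example: drop the low 4 bits, shift right by 4, OR base 0x4000, pad 0.
    assert translate(0x12F7, drop_mask=0xF, shift=4, base=0x4000, pad=0) == 0x825E
    ```

    In the patent, the translated address is only produced when the DAT bit is set and translation is enabled; otherwise the incoming address is used directly to access the memory.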

    Picoengine instruction that controls an intelligent packet data register file prefetch function
    144.
    Granted Patent
    Picoengine instruction that controls an intelligent packet data register file prefetch function (In Force)

    Publication Number: US09519484B1

    Publication Date: 2016-12-13

    Application Number: US14530764

    Filing Date: 2014-11-02

    Inventor: Gavin J. Stark

    Abstract: A multi-processor includes a pool of processors and a common packet buffer memory. Bytes of packet data of a packet are stored in the packet buffer memory. Each processor has an intelligent packet data register file. One processor is tasked with processing the packet, and its packet data register file caches a subset of the bytes. If the register file detects a packet data prefetch trigger condition, and it does not store some of the bytes in a prefetch window, then it prefetches the bytes before such bytes are required in the execution of a subsequent instruction. The processor has instructions that configure the prefetching, that enable such prefetching, and that disable such prefetching in certain ways.


    Efficient conditional instruction having companion load predicate bits instruction
    145.
    Granted Patent
    Efficient conditional instruction having companion load predicate bits instruction (In Force)

    Publication Number: US09519482B2

    Publication Date: 2016-12-13

    Application Number: US14311225

    Filing Date: 2014-06-20

    Inventor: Gavin J. Stark

    Abstract: A pipelined run-to-completion processor can decode three instructions in three consecutive clock cycles, and can also execute the instructions in three consecutive clock cycles. The first instruction causes the ALU to generate a value which is then loaded due to execution of the first instruction into a register of a register file. The second instruction accesses the register and loads the value into predicate bits in a register file read stage. The predicate bits are loaded in the very next clock cycle following the clock cycle in which the second instruction was decoded. The third instruction is a conditional instruction that uses the values of the predicate bits as a predicate code to determine a predicate function. If a predicate condition (as determined by the predicate function as applied to flags) is true then an instruction operation of the third instruction is carried out, otherwise it is not carried out.

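    The predicate mechanism above can be modeled as a small code-to-function table: the load-predicate instruction stores a predicate code, and the conditional instruction applies the function that code selects to the ALU flags. The table below is invented for illustration; the patent does not enumerate the actual predicate functions here.

    ```python
    # Illustrative model of predicate bits selecting a predicate function
    # applied to ALU flags. The code-to-function mapping is hypothetical.
    PREDICATE_FUNCS = {
        0b00: lambda z, c: True,     # always execute
        0b01: lambda z, c: z,        # execute if zero flag set
        0b10: lambda z, c: c,        # execute if carry flag set
        0b11: lambda z, c: not z,    # execute if zero flag clear
    }

    def conditional_executes(predicate_bits: int, zero: bool, carry: bool) -> bool:
        """Decide whether the conditional instruction's operation is carried out."""
        return bool(PREDICATE_FUNCS[predicate_bits & 0b11](zero, carry))

    assert conditional_executes(0b01, zero=True,  carry=False) is True
    assert conditional_executes(0b11, zero=True,  carry=False) is False
    ```

    The pipelining point in the abstract is that the predicate bits become available in the very next cycle after the load-predicate instruction decodes, so the conditional instruction can follow immediately.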

    Traffic data pre-filtering
    146.
    Granted Patent
    Traffic data pre-filtering (In Force)

    Publication Number: US09515929B2

    Publication Date: 2016-12-06

    Application Number: US13929809

    Filing Date: 2013-06-28

    CPC classification number: H04L45/745 H04L49/00

    Abstract: A network appliance includes a first and second compliance checker and an action identifier. Each compliance checker includes a first and second lookup operator. Traffic data is received by the network appliance. A field within the traffic data is separated into a first and second subfield. The first lookup operator performs a lookup operation on the first subfield of the traffic data and generates a first lookup result. The second lookup operator performs a lookup operation on the second subfield of the traffic data and generates a second lookup result. A compliance result is generated by a lookup result analyzer based on the first and second lookup results. An action is generated by an action identifier based at least in part on the compliance result. The action indicates whether or not additional inspection of the traffic data is required. The first and second lookup operators may perform different lookup methodologies.

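    The split-field, two-lookup idea above can be sketched as follows. The abstract notes the two lookup operators may use different methodologies; here one is an exact-match table and the other a ternary mask/value match, and all table contents, field widths, and names are invented.

    ```python
    # Hedged sketch of one compliance checker: a field is split into two
    # subfields, each goes through a different lookup methodology, and
    # the combined compliance result selects an action.
    EXACT = {0x12: "ok"}               # first lookup operator: exact match
    TERNARY = {(0xF0, 0x30): "ok"}     # second: (mask, value) ternary match

    def lookup_first(sub: int) -> str:
        return EXACT.get(sub, "miss")

    def lookup_second(sub: int) -> str:
        for (mask, value), result in TERNARY.items():
            if sub & mask == value:
                return result
        return "miss"

    def action(field: int) -> str:
        hi, lo = field >> 8, field & 0xFF   # split field into two subfields
        compliant = lookup_first(hi) == "ok" and lookup_second(lo) == "ok"
        return "pass" if compliant else "inspect_further"

    assert action(0x1234) == "pass"             # both subfield lookups hit
    assert action(0x9934) == "inspect_further"  # first lookup misses
    ```

    The returned action corresponds to the abstract's action identifier deciding whether additional inspection of the traffic data is required.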

    Transactional memory having local CAM and NFA resources
    147.
    Granted Patent
    Transactional memory having local CAM and NFA resources (In Force)

    Publication Number: US09465651B2

    Publication Date: 2016-10-11

    Application Number: US14151677

    Filing Date: 2014-01-09

    CPC classification number: G06F9/467

    Abstract: A remote processor interacts with a transactional memory that has a memory, local BWC (Byte-Wise Compare) resources, and local NFA (Non-deterministic Finite Automaton) engine resources. The processor causes a byte stream to be transferred into the transactional memory and into the memory. The processor then uses the BWC circuit to find a character signature in the byte stream. The processor obtains information about the character signature from the BWC circuit, and based on the information uses the NFA engine to process the byte stream starting at a byte position determined based at least in part on the results of the BWC circuit. From the time the byte stream is initially written into the transactional memory until the time the NFA engine completes, the byte stream is not read out of the transactional memory.

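    A rough software analogue of the flow above: a byte-wise compare step locates a character signature, then an NFA-style matcher resumes from the byte position that hit reported. Python's `bytes.find` and `re` engine stand in for the hardware BWC and NFA resources; the stream, signature, and pattern are invented examples.

    ```python
    # Software analogue of the BWC-then-NFA flow (not the hardware).
    import re

    def scan(byte_stream: bytes, signature: bytes, pattern: bytes):
        pos = byte_stream.find(signature)        # BWC step: locate the signature
        if pos < 0:
            return None
        m = re.match(pattern, byte_stream[pos:]) # NFA step from that byte position
        return m.group(1) if m else None

    stream = b"junk...GET /index HTTP/1.1"
    assert scan(stream, b"GET ", rb"GET /([a-z]+)") == b"index"
    ```

    The key property in the abstract is that the byte stream stays resident in the transactional memory from the initial write until the NFA engine completes, so neither step requires reading it back out.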

    Intelligent packet data register file that prefetches data for future instruction execution
    148.
    Granted Patent
    Intelligent packet data register file that prefetches data for future instruction execution (In Force)

    Publication Number: US09417916B1

    Publication Date: 2016-08-16

    Application Number: US14530763

    Filing Date: 2014-11-02

    Inventor: Gavin J. Stark

    Abstract: A multi-processor includes a pool of processors and a common packet buffer memory. Bytes of packet data of a packet are stored in the packet buffer memory. Each processor has an intelligent packet data register file. One processor is tasked with processing the packet, and its packet data register file caches a subset of the bytes. Some instructions when executed require that the packet data register file supply the execute stage of the processor with certain bytes of the packet data. The register file detects a packet data prefetch trigger condition, and in response determines if it does not store some of the bytes in a prefetch window. If it does not, then it retrieves those bytes from the packet buffer memory, so that it then has all the bytes in the prefetch window. In one example, a subsequently executed instruction uses the prefetched packet data.

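    The prefetch-window check above (shared by this patent and the related picoengine-instruction patent) can be sketched as a small cache model. The window size, trigger interface, and class names are illustrative assumptions.

    ```python
    # Sketch of the intelligent register file's prefetch-window check:
    # on a trigger, verify the upcoming window of packet bytes is cached
    # and fetch any missing bytes from the shared packet buffer memory.
    class PacketDataRegisterFile:
        WINDOW = 16  # illustrative: bytes to have on hand past the offset

        def __init__(self, packet_buffer: bytes):
            self.packet_buffer = packet_buffer  # common packet buffer memory
            self.cached = bytearray()           # subset of bytes cached so far

        def on_trigger(self, offset: int) -> None:
            """Prefetch so that bytes [offset, offset + WINDOW) are cached."""
            need = min(offset + self.WINDOW, len(self.packet_buffer))
            if len(self.cached) < need:
                self.cached += self.packet_buffer[len(self.cached):need]

        def read(self, offset: int, n: int) -> bytes:
            """Supply the execute stage with cached packet bytes."""
            return bytes(self.cached[offset:offset + n])

    rf = PacketDataRegisterFile(bytes(range(64)))
    rf.on_trigger(offset=0)   # caches bytes 0..15 ahead of use
    assert rf.read(0, 4) == bytes([0, 1, 2, 3])
    rf.on_trigger(offset=20)  # window not fully cached, so prefetch through byte 35
    assert rf.read(30, 2) == bytes([30, 31])
    ```

    The point of the design is that the fetch happens before a subsequent instruction needs those bytes, so the execute stage never stalls waiting on the packet buffer memory.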

    Inter-packet interval prediction operating algorithm
    150.
    Granted Patent
    Inter-packet interval prediction operating algorithm (In Force)

    Publication Number: US09344384B2

    Publication Date: 2016-05-17

    Application Number: US13675453

    Filing Date: 2012-11-13

    Abstract: An appliance receives packets that are part of a flow pair, each packet sharing an application protocol. The appliance determines an estimated application protocol of the packets without performing deep packet inspection on any packets. The estimated application protocol may be determined by using an application protocol estimation table. The appliance then predicts the inter-packet interval between a packet previously received by the appliance and a next packet not yet received by the appliance. The inter-packet interval may be determined by using an inter-packet interval prediction table. The appliance then preloads packet flow data in a cache before the next packet is predicted to arrive at the appliance. By the time the next packet is received, the packet flow data is already in the cache. This reduces packet processing time by removing the waiting periods previously required to cache packet flow data from external memory after receiving the next packet.

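    The prediction path above reduces to a table lookup plus a timing check. The table contents, time units, and lead time below are invented; only the structure (protocol estimate indexing an interval table, and preloading ahead of the predicted arrival) follows the abstract.

    ```python
    # Toy version of the inter-packet interval prediction path. The
    # interval table entries and lead time are illustrative assumptions.
    INTERVAL_TABLE_MS = {"http": 5.0, "dns": 50.0}  # predicted gap per protocol

    def predicted_arrival(last_arrival_ms: float, est_protocol: str) -> float:
        """Look up the predicted inter-packet interval and add it on."""
        return last_arrival_ms + INTERVAL_TABLE_MS[est_protocol]

    def should_preload(now_ms: float, last_arrival_ms: float,
                       est_protocol: str, lead_ms: float = 1.0) -> bool:
        """Preload flow data into the cache lead_ms before the predicted arrival."""
        return now_ms >= predicted_arrival(last_arrival_ms, est_protocol) - lead_ms

    assert predicted_arrival(100.0, "http") == 105.0
    assert should_preload(104.5, 100.0, "http") is True   # within 1 ms of arrival
    assert should_preload(102.0, 100.0, "http") is False  # too early to preload
    ```

    The benefit claimed in the abstract is that when the next packet actually arrives, its flow data is already cached, eliminating the external-memory wait.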
