NFA BYTE DETECTOR
    111.
    Invention Application
    NFA BYTE DETECTOR (Granted)

    Publication No.: US20150193374A1

    Publication Date: 2015-07-09

    Application No.: US14151688

    Filing Date: 2014-01-09

    Abstract: An automaton hardware engine employs a transition table organized into 2^n rows, where each row comprises a plurality of n-bit storage locations, and where each storage location can store at most one n-bit entry value. Each row corresponds to an automaton state. In one example, at least two NFAs are encoded into the table. The first NFA is indexed into the rows of the transition table in a first way, and the second NFA is indexed into the rows of the transition table in a second way. Due to this indexing, all rows are usable to store entry values that point to other rows.
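The row-indexing idea can be sketched behaviorally. The table geometry follows the abstract (2^n rows holding n-bit entries that name other rows), but the two indexing functions, the two-way byte classification, and the toy automata below are illustrative assumptions, not the patent's actual scheme:

```python
N = 3                      # entry width in bits
ROWS = 2 ** N              # one row per automaton state

# Each row maps an input byte class (just 0 or 1 here, for brevity) to an
# n-bit entry value naming the next row (i.e. the next state).
table = [[0] * 2 for _ in range(ROWS)]

def index_nfa1(state):
    """First NFA: states map to rows directly (assumed scheme)."""
    return state % ROWS

def index_nfa2(state):
    """Second NFA: states map to rows via a bit-reversed index, so the
    two automata interleave and every row can hold a useful entry."""
    r = 0
    for i in range(N):
        r = (r << 1) | ((state >> i) & 1)
    return r

# Encode a toy two-state cycle for each NFA into the shared table.
for s in range(2):
    table[index_nfa1(s)][0] = index_nfa1((s + 1) % 2)
    table[index_nfa2(s)][1] = index_nfa2((s + 1) % 2)

def step(row, byte_class):
    """One automaton step: read the n-bit entry and jump to that row."""
    return table[row][byte_class]

row = step(index_nfa1(0), 0)   # follow the first NFA's transitions
```

Because the second NFA's states land on bit-reversed row numbers, its entries occupy rows the first NFA's direct indexing would otherwise leave unused.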


    HIERARCHICAL RESOURCE POOLS IN A LINKER
    112.
    Invention Application
    HIERARCHICAL RESOURCE POOLS IN A LINKER (Granted)

    Publication No.: US20150128118A1

    Publication Date: 2015-05-07

    Application No.: US14074623

    Filing Date: 2013-11-07

    CPC classification number: G06F8/54 G06F8/45 G06F8/453

    Abstract: A novel declare instruction can be used in source code to declare a sub-pool of resource instances to be taken from the resource instances of a larger declared pool. Using such declare instructions, a hierarchy of pools and sub-pools can be declared. A novel allocate instruction can then be used in the source code to instruct a novel linker to make resource instance allocations from a desired pool or a desired sub-pool of the hierarchy. After compilation, the declare and allocate instructions appear in the object code. The linker uses the declare and allocate instructions in the object code to set up the hierarchy of pools and to make the indicated allocations of resource instances to symbols. After resource allocation, the linker replaces instances of a symbol in the object code with the address of the allocated resource instance, thereby generating executable code.
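A minimal model of the declare/allocate flow described above; the `Pool` API, the pool names, and the register addresses are invented for illustration:

```python
class Pool:
    """A declared pool of resource instances; sub-pools take their
    instances from the parent pool, forming a hierarchy."""

    def __init__(self, name, instances):
        self.name = name
        self.free = list(instances)          # unallocated resource instances

    def declare_sub_pool(self, name, count):
        """Carve `count` instances out of this pool into a child pool."""
        taken, self.free = self.free[:count], self.free[count:]
        return Pool(name, taken)

    def allocate(self, symbol, bindings):
        """Bind one free instance to `symbol`, as the linker would."""
        bindings[symbol] = self.free.pop(0)

# A two-level hierarchy: a pool of register addresses and one sub-pool.
top = Pool("regs", [0x100, 0x104, 0x108, 0x10C])
fast = top.declare_sub_pool("fast_regs", 2)

bindings = {}
fast.allocate("counter", bindings)           # drawn from the sub-pool
top.allocate("scratch", bindings)            # drawn from the parent's remainder

# Symbol resolution: rewrite symbol references in object code to addresses.
obj_code = ["load counter", "store scratch"]
linked = []
for line in obj_code:
    words = [hex(bindings[w]) if w in bindings else w for w in line.split()]
    linked.append(" ".join(words))
print(linked)   # ['load 0x100', 'store 0x108']
```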


    HARDWARE FIRST COME FIRST SERVE ARBITER USING MULTIPLE REQUEST BUCKETS
    113.
    Invention Application
    HARDWARE FIRST COME FIRST SERVE ARBITER USING MULTIPLE REQUEST BUCKETS (Granted)

    Publication No.: US20150127864A1

    Publication Date: 2015-05-07

    Application No.: US14074469

    Filing Date: 2013-11-07

    Inventor: Gavin J. Stark

    CPC classification number: G06F13/1663 G06F13/3625 G06F13/364

    Abstract: A First Come First Serve (FCFS) arbiter receives a request to utilize a shared resource from a plurality of devices and in response generates a grant value indicating if the request is granted. The FCFS arbiter includes a circuit and a storage device. The circuit receives a first request and a grant enable during a first clock cycle and outputs a grant value. The grant enable is received from a shared resource. The grant value is communicated to the source of the first request. The storage device includes a plurality of request buckets. The first request is stored in a first request bucket when the first request is not granted during the first clock cycle and is moved from the first request bucket to a second request bucket when the first request is not granted during a second clock cycle. A granted request is cleared from all request buckets.
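The bucket-aging behavior can be sketched as a cycle-by-cycle simulation; the requester ids, the two-bucket depth, and the tie-break rule are assumptions for illustration, not the patented circuit:

```python
class FcfsArbiter:
    def __init__(self, n_buckets=2):
        # buckets[0] holds the newest requests, buckets[-1] the oldest.
        self.buckets = [set() for _ in range(n_buckets)]

    def cycle(self, new_requests, grant_enable):
        """One clock cycle: accept new requests, optionally grant one,
        then age every ungranted request toward the oldest bucket."""
        self.buckets[0] |= set(new_requests)
        granted = None
        if grant_enable:
            # Serve the oldest non-empty bucket first: first come, first serve.
            for bucket in reversed(self.buckets):
                if bucket:
                    granted = min(bucket)      # arbitrary tie-break (assumed)
                    break
        if granted is not None:
            for bucket in self.buckets:        # clear from all request buckets
                bucket.discard(granted)
        # Age: move ungranted requests into the next-older bucket.
        for i in range(len(self.buckets) - 1, 0, -1):
            self.buckets[i] |= self.buckets[i - 1]
            self.buckets[i - 1] = set()
        return granted

arb = FcfsArbiter()
arb.cycle({"devA"}, grant_enable=False)   # devA ungranted, ages to bucket 1
g = arb.cycle({"devB"}, grant_enable=True)
print(g)   # devA wins the grant: it has waited longer than devB
```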


    HARDWARE PREFIX REDUCTION CIRCUIT
    114.
    Invention Application
    HARDWARE PREFIX REDUCTION CIRCUIT (Granted)

    Publication No.: US20150054547A1

    Publication Date: 2015-02-26

    Application No.: US13970599

    Filing Date: 2013-08-20

    Applicant: Gavin J. Stark

    Inventor: Gavin J. Stark

    CPC classification number: G06F9/467

    Abstract: A hardware prefix reduction circuit includes a plurality of levels. Each level includes an input conductor, an output conductor, and a plurality of nodes. Each node includes a buffer and a storage device that stores a digital logic level. One node further includes an inverter. Another node further includes an AND gate with two non-inverting inputs. Another node further includes an AND gate with an inverting input and a non-inverting input. One bit of an input value, such as an Internet Protocol address, is communicated on the input conductor. The first level of the prefix reduction circuit includes two nodes and each subsequent level includes twice as many nodes as is included in the preceding level. A digital logic level is individually programmed into each storage device. The digital logic levels stored in the storage devices determine the prefix reduction algorithm implemented by the hardware prefix reduction circuit.
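One plausible behavioral reading of the programmed storage bits — treating them as per-position keep/zero flags that select a prefix length — can be sketched as follows; the actual node wiring (buffers, inverters, AND gates, level doubling) is abstracted away:

```python
def reduce_prefix(addr_bits, keep_flags):
    """Zero every address bit whose programmed storage bit is 0, keeping
    the leading prefix - one way such a circuit could be programmed."""
    return [b & k for b, k in zip(addr_bits, keep_flags)]

addr = [1, 1, 0, 1, 0, 1, 1, 0]        # 8 bits of an input address
program = [1, 1, 1, 0, 0, 0, 0, 0]     # programmed to keep a /3 prefix
print(reduce_prefix(addr, program))    # [1, 1, 0, 0, 0, 0, 0, 0]
```

Reprogramming the stored logic levels changes the reduction performed, which is the key property the abstract describes: the algorithm lives in the storage devices, not in fixed wiring.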


    PACKET PREDICTION IN A MULTI-PROTOCOL LABEL SWITCHING NETWORK USING OPENFLOW MESSAGING
    115.
    Invention Application
    PACKET PREDICTION IN A MULTI-PROTOCOL LABEL SWITCHING NETWORK USING OPENFLOW MESSAGING (Pending, Published)

    Publication No.: US20140233394A1

    Publication Date: 2014-08-21

    Application No.: US14263999

    Filing Date: 2014-04-28

    Abstract: A first switch in a MPLS network receives a plurality of packets. The plurality of packets are part of a pair of flows. The first switch performs a packet prediction learning algorithm on the first plurality of packets and generates packet prediction information. The first switch communicates the packet prediction information to a Network Operation Center (NOC). In response, the NOC communicates the packet prediction information to a second switch within the MPLS network utilizing OpenFlow messaging. In a first example, the NOC communicates a packet prediction control signal to the second switch. In a second example, a packet prediction control signal is not communicated. In the first example, based on the packet prediction control signal, the second switch determines if it will utilize the packet prediction information. In the second example, the second switch independently determines if the packet prediction information is to be used.


    EFFICIENT FORWARDING OF ENCRYPTED TCP RETRANSMISSIONS
    116.
    Invention Application
    EFFICIENT FORWARDING OF ENCRYPTED TCP RETRANSMISSIONS (Granted)

    Publication No.: US20140195797A1

    Publication Date: 2014-07-10

    Application No.: US13737907

    Filing Date: 2013-01-09

    CPC classification number: H04L63/0428 H04L63/168

    Abstract: A network device receives TCP segments of a flow via a first SSL session and transmits TCP segments via a second SSL session. Once a TCP segment has been transmitted, the TCP payload need no longer be stored on the network device. Substantial memory resources are conserved, because the device may have to handle many retransmit TCP segments at a given time. If the device receives a retransmit segment, then the device regenerates the retransmit segment to be transmitted. A data structure of entries is stored, with each entry including a decrypt state and an encrypt state for an associated SSL byte position. The device uses the decrypt state to initialize a decrypt engine, decrypts an SSL payload of the retransmit TCP segment received, uses the encrypt state to initialize an encrypt engine, re-encrypts the SSL payload, and then incorporates the re-encrypted SSL payload into the regenerated retransmit TCP segment.
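The save-state-and-regenerate idea can be modeled with a toy XOR keystream standing in for SSL's real ciphers; the keystream function and its position-only state are illustrative assumptions:

```python
def keystream_byte(key, position):
    return (key + position * 31) & 0xFF           # toy keystream, not SSL

def xcrypt(data, key, state):
    """Encrypt/decrypt `data` starting at stream position `state`;
    returns the output and the advanced cipher state."""
    out = bytes(b ^ keystream_byte(key, state + i) for i, b in enumerate(data))
    return out, state + len(data)

IN_KEY, OUT_KEY = 7, 99                           # decrypt / re-encrypt keys

# Normal forwarding: record (decrypt_state, encrypt_state) per SSL byte
# position instead of keeping the payload itself.
states = {}                                        # byte position -> states
payload = b"hello"
states[0] = (0, 0)                                 # states at this position
wire_in, _ = xcrypt(payload, IN_KEY, 0)            # as received on session 1
wire_out, _ = xcrypt(payload, OUT_KEY, 0)          # as sent on session 2

# A retransmit arrives: look up the saved states for its byte position,
# decrypt the incoming copy, then re-encrypt it for the outgoing session.
d, e = states[0]
plain, _ = xcrypt(wire_in, IN_KEY, d)
regenerated, _ = xcrypt(plain, OUT_KEY, e)
print(regenerated == wire_out)   # True: regenerated without storing the payload
```

The memory saving in the abstract follows directly: a small (decrypt, encrypt) state pair per byte position replaces buffering every transmitted payload.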


    EFFICIENT INTERCEPT OF CONNECTION-BASED TRANSPORT LAYER CONNECTIONS
    117.
    Invention Application
    EFFICIENT INTERCEPT OF CONNECTION-BASED TRANSPORT LAYER CONNECTIONS (Granted)

    Publication No.: US20140189093A1

    Publication Date: 2014-07-03

    Application No.: US13730985

    Filing Date: 2012-12-29

    Abstract: A TCP connection is established between a client and a server, such that packets communicated across the TCP connection pass through a proxy. Based at least in part on a result of monitoring packets flowing across the TCP connection, the proxy determines whether to split the TCP control loop into two TCP control loops so that packets can be inspected more thoroughly. If the TCP control loop is split, then a first TCP control loop manages flow between the client and the proxy, and a second TCP control loop manages flow between the proxy and the server. Due to the two control loops, packets can be held on the proxy long enough to be analyzed. In some circumstances, a decision is then made to stop inspecting. The two TCP control loops are merged into a single TCP control loop, and thereafter the proxy passes packets of the TCP connection through unmodified.
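The split/merge decision can be sketched as a small state machine; the inspection triggers (keyword matches) are placeholders for whatever traffic analysis the proxy actually performs:

```python
class TcpProxy:
    """Tracks whether the TCP control loop is currently split in two."""

    def __init__(self):
        self.split = False               # False: one loop, pass-through mode

    def on_packet(self, payload):
        if not self.split and b"suspicious" in payload:
            self.split = True            # split: client<->proxy, proxy<->server
        elif self.split and b"all-clear" in payload:
            self.split = False           # merge back into a single control loop
        return "inspect" if self.split else "pass-through"

proxy = TcpProxy()
print(proxy.on_packet(b"GET /index.html"))      # pass-through
print(proxy.on_packet(b"suspicious payload"))   # inspect: packets held for analysis
```

While split, the proxy owns both control loops and can hold packets; after merging, packets again flow through unmodified, which is the efficiency the abstract claims.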


    Transactional Memory that Performs a Statistics Add-and-Update Operation
    118.
    Invention Application
    Transactional Memory that Performs a Statistics Add-and-Update Operation (Pending, Published)

    Publication No.: US20140025884A1

    Publication Date: 2014-01-23

    Application No.: US13552537

    Filing Date: 2012-07-18

    Abstract: A transactional memory (TM) of an island-based network flow processor (IB-NFP) integrated circuit receives a Stats Add-and-Update (AU) command across a command mesh of a Command/Push/Pull (CPP) data bus from a processor. A memory unit of the TM stores a plurality of first values in a corresponding set of memory locations. A hardware engine of the TM receives the AU command, performs a pull across other meshes of the CPP bus thereby obtaining a set of addresses, uses the pulled addresses to read the first values out of the memory unit, adds the same second value to each of the first values thereby generating a corresponding set of updated first values, and causes the set of updated first values to be written back into the plurality of memory locations. Even though multiple count values are updated, there is only one bus transaction value sent across the CPP bus command mesh.
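The effect of one AU command can be modeled in a few lines; the memory layout, the addresses, and the increment are invented for illustration, and all CPP bus mechanics are omitted:

```python
memory = {0x10: 100, 0x14: 200, 0x18: 300}   # counters ("first values")

def stats_add_and_update(memory, addresses, increment):
    """One AU command: read each first value at the pulled addresses,
    add the same second value to each, and write the results back.
    A single command covers the whole batch of counters."""
    for addr in addresses:
        memory[addr] += increment

stats_add_and_update(memory, [0x10, 0x18], 5)
print(memory[0x10], memory[0x18])   # 105 305
```

The point of the hardware design is that this whole batch costs one command-mesh transaction, rather than one read-modify-write exchange per counter.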


    Recursive Lookup with a Hardware Trie Structure that has no Sequential Logic Elements
    119.
    Invention Application
    Recursive Lookup with a Hardware Trie Structure that has no Sequential Logic Elements (Granted)

    Publication No.: US20140025858A1

    Publication Date: 2014-01-23

    Application No.: US13552555

    Filing Date: 2012-07-18

    CPC classification number: H03K17/00 G06F9/467 G06F13/40 H04L45/745 H04L45/748

    Abstract: A hardware trie structure includes a tree of internal node circuits and leaf node circuits. Each internal node is configured by a corresponding multi-bit node control value (NCV). Each leaf node can output a corresponding result value (RV). An input value (IV) supplied onto input leads of the trie causes signals to propagate through the trie such that one of the leaf nodes outputs one of the RVs onto output leads of the trie. In a transactional memory, a memory stores a set of NCVs and RVs. In response to a lookup command, the NCVs and RVs are read out of memory and are used to configure the trie. The IV of the lookup is supplied to the input leads, and the trie looks up an RV. A non-final RV initiates another lookup in a recursive fashion, whereas a final RV is returned as the result of the lookup command.
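The configured-trie lookup and its recursive continuation can be sketched as follows; the heap-style node layout and the `('final', …)` / `('next', …)` result tagging are assumed encodings for illustration, not the patent's:

```python
def trie_lookup(ncvs, rvs, iv, depth):
    """Walk a perfect binary trie of `depth` levels stored heap-style:
    internal node i tests input-value bit ncvs[i] (its NCV); the leaves,
    numbered left to right, hold the result values (RVs)."""
    node = 0
    for _ in range(depth):
        bit = (iv >> ncvs[node]) & 1
        node = 2 * node + 1 + bit            # descend to left/right child
    return rvs[node - (2 ** depth - 1)]      # convert node id to leaf index

def recursive_lookup(tables, iv, depth=2):
    """Repeat lookups until a final RV appears; a non-final result
    ('next', t) names the next (NCVs, RVs) configuration to load."""
    table = 0
    while True:
        ncvs, rvs = tables[table]
        rv = trie_lookup(ncvs, rvs, iv, depth)
        if rv[0] == "final":
            return rv[1]
        table = rv[1]

# Two tiny configured tries; a non-final result in table 0 hands off to table 1.
tables = {
    0: ([0, 1, 1], [("final", "A"), ("next", 1), ("final", "B"), ("final", "C")]),
    1: ([2, 3, 3], [("final", "D"), ("final", "E"), ("final", "F"), ("final", "G")]),
}
print(recursive_lookup(tables, 2))   # D
```

Reloading NCVs and RVs between iterations mirrors the abstract's scheme: the combinational trie itself holds no state, so each recursion simply reconfigures it from memory.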


    Processing resource management in an island-based network flow processor
    120.
    Invention Grant
    Processing resource management in an island-based network flow processor (Granted)

    Publication No.: US08559436B2

    Publication Date: 2013-10-15

    Application No.: US13399958

    Filing Date: 2012-02-17

    CPC classification number: H04L12/6418

    Abstract: An island-based network flow processor (IB-NFP) integrated circuit has a high performance processor island. The processor island has a processor and a tightly coupled memory. The integrated circuit also has another memory. The other memory may be internal or external memory. The header of an incoming packet is stored in the tightly coupled memory of the processor island. The payload is stored in the other memory. In one example, if the amount of a processing resource is below a threshold then the header is moved from the first island to the other memory before the header and payload are communicated to an egress island for outputting from the integrated circuit. If, however, the amount of the processing resource is not below the threshold then the header is moved directly from the processor island to the egress island and is combined with the payload there for outputting from the integrated circuit.

