Abstract:
An automaton hardware engine employs a transition table organized into 2^n rows, where each row comprises a plurality of n-bit storage locations, and where each storage location can store at most one n-bit entry value. Each row corresponds to an automaton state. In one example, at least two NFAs are encoded into the table. The first NFA is indexed into the rows of the transition table in a first way, and the second NFA is indexed into the rows of the transition table in a second way. Due to this indexing, all rows are usable to store entry values that point to other rows.
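As a rough software model of this indexing scheme, consider the sketch below. The table width n = 8 and the two index mappings (identity for the first NFA, bitwise complement for the second) are assumptions chosen for illustration; the abstract requires only that the two NFAs be indexed into the rows in different ways.

```c
#include <stdint.h>

/* Minimal sketch of a shared transition table, assuming n = 8
 * (2^8 = 256 rows, 8-bit entry values) and four storage locations
 * per row. The two mappings below are illustrative assumptions. */
#define N_BITS 8
#define N_ROWS (1 << N_BITS)       /* 2^n rows */
#define ENTRIES_PER_ROW 4          /* "a plurality" of n-bit locations */

static uint8_t table[N_ROWS][ENTRIES_PER_ROW];

/* First NFA: a state number maps directly to a row. */
static inline uint8_t row_of_nfa1(uint8_t state) { return state; }

/* Second NFA: a state number maps to a row from the other end of the
 * table, so together the two NFAs can use all 2^n rows. */
static inline uint8_t row_of_nfa2(uint8_t state) { return (uint8_t)~state; }

/* A stored entry value is itself a row number, i.e., it points to
 * another row and therefore to another automaton state. */
static inline uint8_t next_state(uint8_t row, unsigned slot) {
    return table[row][slot];
}
```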
Abstract:
A novel declare instruction can be used in source code to declare a sub-pool of resource instances to be taken from the resource instances of a larger declared pool. Using such declare instructions, a hierarchy of pools and sub-pools can be declared. A novel allocate instruction can then be used in the source code to instruct a novel linker to make resource instance allocations from a desired pool or a desired sub-pool of the hierarchy. After compilation, the declare and allocate instructions appear in the object code. The linker uses the declare and allocate instructions in the object code to set up the hierarchy of pools and to make the indicated allocations of resource instances to symbols. After resource allocation, the linker replaces instances of a symbol in the object code with the address of the allocated resource instance, thereby generating executable code.
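The abstract gives no concrete syntax for the declare and allocate instructions, so the pragma-style sketch below is purely hypothetical; the pool names, sizes, and the pool_declare/pool_alloc spellings are inventions for illustration only.

```c
/* Hypothetical source-level syntax for a pool/sub-pool hierarchy.
 * After compilation these directives survive into the object code,
 * where the linker uses them to set up the hierarchy and allocate. */

/* Declare a pool of 64 resource instances, then a sub-pool whose 16
 * instances are taken from the instances of the larger pool. */
#pragma pool_declare(filters, 64)
#pragma pool_declare(fast_filters, 16, parent = filters)

/* Ask the linker to allocate one instance from the sub-pool and bind
 * it to the symbol my_filter; after allocation the linker replaces
 * uses of my_filter with the address of the allocated instance,
 * thereby producing executable code. */
#pragma pool_alloc(my_filter, fast_filters)

extern volatile unsigned int my_filter;  /* resolved by the linker */
```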
Abstract:
A First Come First Served (FCFS) arbiter receives requests to utilize a shared resource from a plurality of devices and in response generates a grant value indicating whether a request is granted. The FCFS arbiter includes a circuit and a storage device. The circuit receives a first request and a grant enable during a first clock cycle and outputs a grant value. The grant enable is received from the shared resource. The grant value is communicated to the source of the first request. The storage device includes a plurality of request buckets. The first request is stored in a first request bucket when it is not granted during the first clock cycle and is moved from the first request bucket to a second request bucket when it is not granted during a second clock cycle. A granted request is cleared from all request buckets.
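A behavioral sketch of the bucket scheme follows, assuming two request buckets, one request bit per device, and lowest-device-index priority within a bucket; none of those specifics are fixed by the abstract.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_DEVICES 8

static uint8_t bucket[2];   /* bit i set = device i has a waiting request */

/* One clock cycle: if the shared resource asserts grant_enable, grant
 * the oldest pending request (bucket 1, then bucket 0, then requests
 * arriving this cycle). A granted request is cleared from all buckets;
 * ungranted requests age from bucket 0 into bucket 1. Returns the
 * granted device id, or -1 if nothing is granted. */
static int arbitrate(uint8_t new_requests, bool grant_enable) {
    int grant = -1;
    if (grant_enable) {
        uint8_t groups[3] = { bucket[1], bucket[0], new_requests };
        for (int g = 0; g < 3 && grant < 0; g++)
            for (int i = 0; i < NUM_DEVICES; i++)
                if (groups[g] & (1u << i)) { grant = i; break; }
    }
    if (grant >= 0) {
        bucket[0] &= (uint8_t)~(1u << grant);   /* granted request is   */
        bucket[1] &= (uint8_t)~(1u << grant);   /* cleared everywhere   */
        new_requests &= (uint8_t)~(1u << grant);
    }
    bucket[1] |= bucket[0];       /* not granted in a second cycle: age */
    bucket[0] = new_requests;     /* not granted in its first cycle     */
    return grant;
}
```

Because ungranted requests age from the first bucket into the second, and the older bucket is scanned first, the oldest outstanding request is served first, which is what gives the first-come-first-served behavior.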
Abstract:
A hardware prefix reduction circuit includes a plurality of levels. Each level includes an input conductor, an output conductor, and a plurality of nodes. Each node includes a buffer and a storage device that stores a digital logic level. One node further includes an inverter. Another node further includes an AND gate with two non-inverting inputs. Another node further includes an AND gate with an inverting input and a non-inverting input. One bit of an input value, such as an internet protocol address, is communicated on the input conductor. The first level of the prefix reduction circuit includes two nodes, and each subsequent level includes twice as many nodes as the preceding level. A digital logic level is individually programmed into each storage device. The digital logic levels stored in the storage devices determine the prefix reduction algorithm implemented by the hardware prefix reduction circuit.
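The sketch below models the circuit's behavior rather than its gates: each node holds one programmable bit, and the walk consumes one address bit per level. Treating the programmed bits as enables that terminate the match, and returning the last enabled node as the reduced value, are interpretive assumptions of this model.

```c
#include <stdbool.h>
#include <stdint.h>

#define LEVELS 4                    /* depth of the sketch tree */

/* One individually programmed bit per node; level k has 2^(k+1)
 * nodes (two at the first level, doubling at each level after). */
static bool prog[LEVELS][2 << LEVELS];

/* Walk the tree with one IP address bit per level: the even child of
 * a node corresponds to a 0 bit, the odd child to a 1 bit, and a node
 * passes the signal on only if its programmed bit enables it. The
 * last enabled node index is returned as the reduced prefix, so
 * reprogramming prog[][] changes the reduction algorithm. */
static int reduce_prefix(uint32_t ip)
{
    int node = 0, reduced = 0;
    for (int k = 0; k < LEVELS; k++) {
        bool bit = (ip >> (31 - k)) & 1;   /* next input-value bit */
        node = node * 2 + (bit ? 1 : 0);
        if (prog[k][node])
            reduced = node;                /* match extends a level */
        else
            break;                         /* prefix ends here */
    }
    return reduced;
}
```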
Abstract:
A first switch in an MPLS network receives a plurality of packets. The plurality of packets are part of a pair of flows. The first switch performs a packet prediction learning algorithm on the plurality of packets and generates packet prediction information. The first switch communicates the packet prediction information to a Network Operation Center (NOC). In response, the NOC communicates the packet prediction information to a second switch within the MPLS network utilizing OpenFlow messaging. In a first example, the NOC communicates a packet prediction control signal to the second switch. In a second example, a packet prediction control signal is not communicated. In the first example, based on the packet prediction control signal, the second switch determines if it will utilize the packet prediction information. In the second example, the second switch independently determines if the packet prediction information is to be used.
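OpenFlow defines no standard packet-prediction message, so the experimenter-style structure and the decision helper below are assumptions used only to contrast the two examples.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical message layout carried from the NOC to the second
 * switch over OpenFlow messaging; field names and sizes are
 * assumptions of this sketch. */
struct prediction_msg {
    uint8_t prediction_info[64];  /* learned by the first switch */
    bool    has_control_signal;   /* first example vs. second example */
    bool    control_signal;       /* NOC: use the prediction or not */
};

/* Second switch: in the first example the NOC's control signal
 * decides whether the prediction information is used; in the second
 * example the switch applies its own independent policy. */
static bool use_prediction(const struct prediction_msg *m,
                           bool local_policy_says_yes)
{
    if (m->has_control_signal)
        return m->control_signal;     /* first example */
    return local_policy_says_yes;     /* second example */
}
```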
Abstract:
A network device receives TCP segments of a flow via a first SSL session and transmits TCP segments via a second SSL session. Once a TCP segment has been transmitted, its TCP payload need no longer be stored on the network device. This conserves substantial memory resources, because the device may have to handle many retransmit TCP segments at a given time. If the device receives a retransmit segment, it regenerates the retransmit segment to be transmitted. A data structure of entries is stored, with each entry including a decrypt state and an encrypt state for an associated SSL byte position. The device uses the decrypt state to initialize a decrypt engine, decrypts the SSL payload of the received retransmit TCP segment, uses the encrypt state to initialize an encrypt engine, re-encrypts the SSL payload, and then incorporates the re-encrypted SSL payload into the regenerated retransmit TCP segment.
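A sketch of the per-position entry and the regeneration steps follows. The 16-byte state fields and the engine interface are assumptions; the abstract says only that each entry holds a decrypt state and an encrypt state for an associated SSL byte position.

```c
#include <stdint.h>
#include <stddef.h>

/* One entry of the stored data structure; field sizes (e.g., a
 * block-cipher-style 16-byte state) are assumptions of this sketch. */
struct ssl_state_entry {
    uint64_t ssl_byte_pos;       /* position in the SSL byte stream */
    uint8_t  decrypt_state[16];  /* cipher state for the first session */
    uint8_t  encrypt_state[16];  /* cipher state for the second session */
};

/* Hypothetical engine interface, declared only for this sketch. */
void *decrypt_init(const uint8_t state[16]);
void  decrypt_bytes(void *eng, uint8_t *buf, size_t len);
void *encrypt_init(const uint8_t state[16]);
void  encrypt_bytes(void *eng, uint8_t *buf, size_t len);

/* Regenerate a retransmit segment's payload without having kept it:
 * re-decrypt the incoming SSL payload using the saved decrypt state,
 * then re-encrypt it for the second session using the saved encrypt
 * state. The caller incorporates the result into the regenerated
 * retransmit TCP segment. */
void regenerate_payload(const struct ssl_state_entry *e,
                        uint8_t *payload, size_t len)
{
    void *d = decrypt_init(e->decrypt_state);
    decrypt_bytes(d, payload, len);   /* recover first-session plaintext */
    void *c = encrypt_init(e->encrypt_state);
    encrypt_bytes(c, payload, len);   /* ciphertext for second session */
}
```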
Abstract:
A TCP connection is established between a client and a server, such that packets communicated across the TCP connection pass through a proxy. Based at least in part on a result of monitoring packets flowing across the TCP connection, the proxy determines whether to split the TCP control loop into two TCP control loops so that packets can be inspected more thoroughly. If the TCP control loop is split, then a first TCP control loop manages flow between the client and the proxy and a second TCP control loop manages flow between the proxy and the server. Due to the two control loops, packets can be held on the proxy long enough to be analyzed. In some circumstances, a decision is then made to stop inspecting, in which case the two TCP control loops are merged into a single TCP control loop, and thereafter the proxy passes packets of the TCP connection through unmodified.
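The state machine below sketches the split/merge decision on the proxy. The trigger conditions (looks_suspicious, inspection_done) are placeholders, since the abstract does not specify what the monitoring looks for.

```c
#include <stdbool.h>

/* Proxy-side connection state: one control loop end-to-end, or two
 * loops with the proxy terminating both sides. */
enum proxy_mode { PASSTHROUGH, SPLIT };

struct tcp_proxy {
    enum proxy_mode mode;
    /* when SPLIT: loop A is client<->proxy, loop B is proxy<->server */
};

/* Called for each packet observed on the TCP connection; the two
 * boolean inputs stand in for the monitoring result. */
void on_packet(struct tcp_proxy *p, bool looks_suspicious,
               bool inspection_done)
{
    if (p->mode == PASSTHROUGH && looks_suspicious) {
        p->mode = SPLIT;        /* run two TCP control loops: packets can
                                   now be held long enough to analyze */
    } else if (p->mode == SPLIT && inspection_done) {
        p->mode = PASSTHROUGH;  /* merge back into one control loop and
                                   pass packets through unmodified */
    }
}
```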
Abstract:
A transactional memory (TM) of an island-based network flow processor (IB-NFP) integrated circuit receives a Stats Add-and-Update (AU) command across a command mesh of a Command/Push/Pull (CPP) data bus from a processor. A memory unit of the TM stores a plurality of first values in a corresponding set of memory locations. A hardware engine of the TM receives the AU command, performs a pull across other meshes of the CPP bus thereby obtaining a set of addresses, uses the pulled addresses to read the first values out of the memory unit, adds the same second value to each of the first values thereby generating a corresponding set of updated first values, and causes the set of updated first values to be written back into the set of memory locations. Even though multiple count values are updated, only one bus transaction value is sent across the CPP bus command mesh.
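Functionally, the hardware engine's work amounts to the loop below; the array types stand in for the TM's memory unit and the CPP pull data, and are assumptions of this sketch.

```c
#include <stdint.h>
#include <stddef.h>

/* Behavioral sketch of the AU command: one command pulls a set of
 * addresses, reads the first values they point to, adds the same
 * second value to each, and writes the updated values back. */
void stats_add_update(uint32_t *memory,             /* TM memory unit */
                      const uint32_t *pulled_addrs, /* addresses obtained
                                                       by the CPP pull */
                      size_t n_addrs,
                      uint32_t increment)           /* same second value */
{
    for (size_t i = 0; i < n_addrs; i++) {
        uint32_t v = memory[pulled_addrs[i]];       /* read first value */
        memory[pulled_addrs[i]] = v + increment;    /* write updated value */
    }
    /* All of this happens inside the hardware engine: only the single
       AU command crossed the CPP bus command mesh. */
}
```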
Abstract:
A hardware trie structure includes a tree of internal node circuits and leaf node circuits. Each internal node is configured by a corresponding multi-bit node control value (NCV). Each leaf node can output a corresponding result value (RV). An input value (IV) supplied onto input leads of the trie causes signals to propagate through the trie such that one of the leaf nodes outputs one of the RVs onto output leads of the trie. In a transactional memory, a memory stores a set of NCVs and RVs. In response to a lookup command, the NCVs and RVs are read out of memory and are used to configure the trie. The IV of the lookup is supplied to the input leads, and the trie looks up an RV. A non-final RV initiates another lookup in a recursive fashion, whereas a final RV is returned as the result of the lookup command.
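A software model of one lookup is sketched below. The node encoding (each NCV selects which bit of the IV an internal node tests) and the top-bit final/non-final marking of RVs are assumptions; the abstract fixes only the configurable-trie and recursive-lookup behavior.

```c
#include <stdint.h>
#include <stdbool.h>

#define DEPTH 3                       /* internal-node levels in the sketch */
#define FINAL_FLAG 0x80000000u        /* assumed final/non-final RV marker */

/* One NCV/RV set read out of the transactional memory to configure
 * the trie for a lookup. */
struct trie_config {
    uint8_t  ncv[(1 << DEPTH) - 1];   /* bit of the IV tested at each node */
    uint32_t rv[1 << DEPTH];          /* one result value per leaf */
};

/* Propagate the input value (IV) through the configured trie: each
 * internal node tests one IV bit, steering toward one leaf, whose RV
 * is output. */
static uint32_t trie_lookup_once(const struct trie_config *t, uint32_t iv)
{
    unsigned node = 0, leaf = 0;
    for (int level = 0; level < DEPTH; level++) {
        bool bit = (iv >> t->ncv[node]) & 1;
        leaf = (leaf << 1) | (bit ? 1u : 0u);
        node = 2 * node + 1 + (bit ? 1u : 0u);   /* heap-style child index */
    }
    return t->rv[leaf];
}

/* Hypothetical stand-in for reading the next NCV/RV set from memory. */
const struct trie_config *fetch_config(uint32_t rv);

/* A non-final RV initiates another lookup recursively; a final RV is
 * returned as the result of the lookup command. */
uint32_t trie_lookup(const struct trie_config *t, uint32_t iv)
{
    uint32_t rv = trie_lookup_once(t, iv);
    while (!(rv & FINAL_FLAG))
        rv = trie_lookup_once(fetch_config(rv), iv);
    return rv & ~FINAL_FLAG;
}
```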
Abstract:
An island-based network flow processor (IB-NFP) integrated circuit has a high performance processor island. The processor island has a processor and a tightly coupled memory. The integrated circuit also has another memory. The other memory may be internal or external memory. The header of an incoming packet is stored in the tightly coupled memory of the processor island. The payload is stored in the other memory. In one example, if the amount of a processing resource is below a threshold, then the header is moved from the processor island to the other memory before the header and payload are communicated to an egress island for outputting from the integrated circuit. If, however, the amount of the processing resource is not below the threshold, then the header is moved directly from the processor island to the egress island and is combined with the payload there for outputting from the integrated circuit.
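The egress decision reduces to the branch below. The monitored resource (for example, free buffer credits) and the helper names are assumptions of this sketch.

```c
#include <stdbool.h>

/* Hypothetical helpers, declared only for this sketch. The header is
 * assumed to sit in the processor island's tightly coupled memory and
 * the payload in the other (internal or external) memory. */
bool resource_below_threshold(void);          /* processing-resource monitor */
void move_header_to_other_memory(void *hdr);  /* header joins the payload   */
void send_header_to_egress(void *hdr);        /* header goes straight out   */

void dispatch_packet(void *hdr)
{
    if (resource_below_threshold()) {
        /* Move the header out to the other memory first; header and
           payload are then communicated together to the egress island. */
        move_header_to_other_memory(hdr);
    } else {
        /* Send the header directly from the processor island to the
           egress island, where it is recombined with the payload for
           outputting from the integrated circuit. */
        send_header_to_egress(hdr);
    }
}
```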