Abstract:
A registered synchronous FIFO has a tail register, internal registers, and a head register. The FIFO cannot be pushed if it is full and cannot be popped if it is empty, but otherwise can be pushed and/or popped. Within the FIFO, the internal signal fanout of the incoming data circuitry and the push control circuitry is minimized and remains essentially constant regardless of the number of registers in the FIFO. The output delay of the output data is also essentially constant regardless of the number of registers in the FIFO. An incoming data value can only be written into the head or the tail. If a data value is in the tail and one of the internal registers is empty, and if no push or pop is to be performed in a clock cycle, then the data value in the tail is nevertheless moved into the empty internal register in that cycle.
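Below is a minimal behavioral sketch, in Python, of the register-chain arrangement described above; the class and method names are illustrative and not taken from the patent. It models the key behaviors: data is written only into the head or the tail, pushes are refused when full and pops when empty, and on otherwise idle cycles a value sitting in the tail ripples forward into an empty internal register.

EMPTY = None

class RegisterFifo:
    def __init__(self, num_internal):
        # regs[0] is the head register, regs[-1] is the tail register.
        self.regs = [EMPTY] * (num_internal + 2)

    def full(self):
        return all(r is not EMPTY for r in self.regs)

    def empty(self):
        return all(r is EMPTY for r in self.regs)

    def clock(self, push=None, pop=False):
        """Advance one clock cycle; return the popped value, if any."""
        popped = None
        if pop and self.regs[0] is not EMPTY:
            # Output data is only ever read from the head register.
            popped = self.regs[0]
            self.regs[0] = EMPTY
        # Each value advances at most one register toward the head per
        # cycle; this includes the tail-to-internal move on idle cycles.
        for i in range(len(self.regs) - 1):
            if self.regs[i] is EMPTY and self.regs[i + 1] is not EMPTY:
                self.regs[i] = self.regs[i + 1]
                self.regs[i + 1] = EMPTY
        if push is not None and not self.full():
            # Incoming data is written only into the head (when the FIFO
            # is empty) or into the tail, never into an internal register.
            target = 0 if self.empty() else len(self.regs) - 1
            self.regs[target] = push
        return popped

Because a push in this model only ever drives the head or the tail, the push fanout stays constant as internal registers are added, and a pop always reads the head, so the output delay likewise does not depend on FIFO depth.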
Abstract:
A method for supporting in-flight packet processing is provided. Packet processing devices (microengines) can send a request for packet processing to a packet engine before a packet comes in. The request offers a twofold benefit. First, the microengines add themselves to a work queue to request processing. Once a packet becomes available, its header portion is automatically provided to the corresponding microengine for packet processing, so only one bus transaction is involved for a microengine to start packet processing. Second, the microengines can process a packet before the entire packet is written into the memory. This is especially useful for large packets because a packet does not have to be written into memory completely before the microengines process it.
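The following Python sketch (all names are assumptions, not the patent's terminology) illustrates the request-before-arrival idea: microengines enqueue themselves with the packet engine ahead of time, and as soon as a packet header is available it is handed to the next waiting microengine, before the packet body has finished streaming into memory.

from collections import deque

class PacketEngine:
    def __init__(self):
        self.work_queue = deque()        # microengines waiting for a packet

    def request_work(self, microengine):
        # A microengine asks for the next packet before it has arrived.
        self.work_queue.append(microengine)

    def header_received(self, header, body_stream):
        # As soon as the header portion is in hand, push it to the next
        # waiting microengine in a single transaction; the body may still
        # be streaming into memory.
        if self.work_queue:
            self.work_queue.popleft().process(header, body_stream)

class Microengine:
    def __init__(self, name):
        self.name = name

    def process(self, header, body_stream):
        # Header processing starts without waiting for the full packet.
        print(self.name, "classifying header:", header)

A microengine would call request_work() as part of finishing its previous packet, so the hand-off of the next header needs no separate polling transaction.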
Abstract:
An island-based network flow processor (IB-NFP) integrated circuit includes islands organized in rows. A configurable mesh event bus extends through the islands and is configured to form a local event ring. The configurable mesh event bus is configured with configuration information received via a configurable mesh control bus. The local event ring involves event ring circuits and event ring segments. In one example, a packet is received onto a first island. If an amount of a processing resource (for example, memory buffer space) available to the first island is below a threshold, then an event packet is communicated from the first island to a second island via the local event ring. In response, the second island causes a third island to communicate via a command/push/pull data bus with the first island, thereby increasing the amount of the processing resource available to the first island for handling incoming packets.
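A rough Python model of the reaction chain described above; the island numbering, threshold value, and method names are assumptions made for illustration. When the first island's free buffer count drops below a threshold, it inserts an event packet into the local event ring; a monitoring island sees the event and commands a third island, over a stand-in for the command/push/pull (CPP) bus, to push buffer space to the first island.

class CppBus:
    def command(self, target, op, **kwargs):
        # Stand-in for a command/push/pull bus transaction.
        print("CPP command to island", target, ":", op, kwargs)

class EventRing:
    def __init__(self):
        self.segments = []               # islands listening on the ring
    def insert(self, event):
        # The event packet circulates; every ring segment sees it.
        for island in self.segments:
            island.on_event(event)

class PacketIsland:
    THRESHOLD = 16                       # assumed free-buffer threshold
    def __init__(self, island_id, event_ring):
        self.id = island_id
        self.event_ring = event_ring
        self.free_buffers = 64
    def receive_packet(self):
        self.free_buffers -= 1
        if self.free_buffers < self.THRESHOLD:
            self.event_ring.insert({"type": "buffer_low", "source": self.id})

class MonitorIsland:
    def __init__(self, island_id, cpp_bus):
        self.id = island_id
        self.cpp_bus = cpp_bus
    def on_event(self, event):
        if event["type"] == "buffer_low":
            # Tell a third island to push spare buffer space to the
            # first island over the CPP data bus.
            self.cpp_bus.command(target=3, op="push_buffers",
                                 dest=event["source"], count=32)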
Abstract:
A dispatcher circuit receives sets of instructions from an instructing entity. Instructions of the set that are of a first type are put into a first queue circuit, instructions of a second type are put into a second queue circuit, and so forth. The first queue circuit dispatches instructions of the first type to one or more processing engines and records when those instructions have been completed. When all the instructions of the set of the first type have been completed, the first queue circuit sends the second queue circuit a go signal, which causes the second queue circuit to dispatch instructions of the second type and to record when they have been completed. This process proceeds from queue circuit to queue circuit. When all the instructions of the set have been completed, the dispatcher circuit returns an “instructions done” indication to the original instructing entity.
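The ordering scheme can be sketched behaviorally in Python as a chain of queue circuits, each holding one instruction type and passing a go signal to the next only after all of its own instructions have completed. The class and method names below are illustrative, not the patent's.

class QueueCircuit:
    def __init__(self, name, engines):
        self.name = name                 # the instruction type this queue holds
        self.engines = engines           # processing engines it can dispatch to
        self.pending = []
        self.next_queue = None

    def load(self, instructions):
        self.pending = list(instructions)

    def go(self):
        # Dispatch every instruction of this type and note its completion;
        # only then hand the go signal to the next queue circuit.
        for i, instr in enumerate(self.pending):
            self.engines[i % len(self.engines)](instr)
        self.pending = []
        if self.next_queue is not None:
            self.next_queue.go()

class Dispatcher:
    def __init__(self, queues):
        self.queues = queues
        for first, second in zip(queues, queues[1:]):
            first.next_queue = second

    def run(self, instruction_set):
        # Sort each instruction of the set into the queue for its type,
        # then start the chain with the first queue circuit.
        for q in self.queues:
            q.load([ins for ins in instruction_set if ins["type"] == q.name])
        self.queues[0].go()
        return "instructions done"       # reported back to the instructing entity

Here completion is implicit in the synchronous call to each engine; in hardware the queue circuit would instead record completion responses before asserting the go signal.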
Abstract:
An automaton hardware engine employs a transition table organized into 2^n rows, where each row comprises a plurality of n-bit storage locations, and where each storage location can store at most one n-bit entry value. Each row corresponds to an automaton state. In one example, at least two NFAs are encoded into the table. The first NFA is indexed into the rows of the transition table in a first way, and the second NFA is indexed into the rows of the transition table in a second way. Due to this indexing, all rows are usable to store entry values that point to other rows.
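As a toy model (not the patented encoding), the table can be pictured in Python as 2**n rows of n-bit entries, where each entry is simply the index of another row, and where two NFAs map their states onto rows through different index functions so that they can share one table. The widths and mappings below are assumptions chosen for illustration.

N = 4                          # entry width in bits (assumed)
NUM_ROWS = 2 ** N              # one row per automaton state
ENTRIES_PER_ROW = 8            # storage locations per row (assumed)

# table[row][slot] holds one n-bit entry value, i.e. the index of another row.
table = [[0] * ENTRIES_PER_ROW for _ in range(NUM_ROWS)]

def row_for_nfa1(state):
    # First NFA: states are indexed onto rows directly.
    return state % NUM_ROWS

def row_for_nfa2(state):
    # Second NFA: a different mapping (here simply reversed), so the two
    # NFAs can occupy different rows of the same table.
    return (NUM_ROWS - 1 - state) % NUM_ROWS

def add_transition(row_fn, from_state, slot, to_state):
    # An entry is just an n-bit pointer to the row of the next state.
    table[row_fn(from_state)][slot] = row_fn(to_state) & (NUM_ROWS - 1)

# One transition from each NFA encoded into the shared table.
add_transition(row_for_nfa1, 0, 0, 5)
add_transition(row_for_nfa2, 0, 0, 5)

The two index functions are only meant to show that the same physical rows can serve both automata; the patented indexing scheme itself is not reproduced here.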