Abstract:
An integrated circuit includes ingress ethernet ports and egress ethernet ports. A second ingress ethernet port is configurable to operate in a selected one of a command mode and a data mode. The second ingress ethernet port does not power up in the command mode and can only be put into the command mode as a result of a port modeset command being received onto an ingress ethernet port that is operating in the command mode. A first ingress ethernet port powers up in the command mode. In the command mode, the first ingress ethernet port can receive and carry out a port modeset command. Receiving and carrying out the port modeset command causes one of the ingress ethernet ports identified by the port modeset command to operate in the command mode. A flow table structure adapted to store flow entries is used to determine which egress ethernet port outputs a packet.
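The following is a minimal software model, not the hardware itself, of the port-mode scheme described above: port 0 powers up in command mode, all other ingress ports power up in data mode, and a port modeset command is honored only when it arrives on a port that is already in command mode. All names and the port count are illustrative.

#include <stdbool.h>
#include <stdio.h>

#define NUM_INGRESS_PORTS 4

typedef enum { DATA_MODE, COMMAND_MODE } port_mode_t;

static port_mode_t ingress_mode[NUM_INGRESS_PORTS];

static void power_up(void) {
    for (int p = 0; p < NUM_INGRESS_PORTS; p++)
        ingress_mode[p] = DATA_MODE;      /* default: data mode */
    ingress_mode[0] = COMMAND_MODE;       /* first ingress port powers up in command mode */
}

/* Returns true if the modeset command was carried out. */
static bool port_modeset(int rx_port, int target_port, port_mode_t new_mode) {
    if (ingress_mode[rx_port] != COMMAND_MODE)
        return false;                     /* command ignored on a data-mode port */
    ingress_mode[target_port] = new_mode; /* e.g. put a second port into command mode */
    return true;
}

int main(void) {
    power_up();
    /* A modeset command received on port 0 (command mode) can put port 2 into
     * command mode; the same command arriving on port 1 is ignored. */
    printf("via port 1: %d\n", port_modeset(1, 2, COMMAND_MODE)); /* prints 0 */
    printf("via port 0: %d\n", port_modeset(0, 2, COMMAND_MODE)); /* prints 1 */
    return 0;
}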
Abstract:
An NFA (Non-deterministic Finite Automaton) circuit includes a hardware byte characterizer, a first matching circuit that performs a TCAM match function, a second matching circuit that performs a wide match function, a multiplexer that outputs a selected output from either the first or the second matching circuit, and a storage device. N data values stored in first storage locations of the storage device are supplied to the first matching circuit as an N-bit mask value and are simultaneously supplied to the second matching circuit as N bits of an N+O-bit mask value. O data values stored in second storage locations of the storage device are supplied to the first matching circuit as an O-bit match value and are simultaneously supplied to the second matching circuit as the remaining O bits of the N+O-bit mask value. P data values stored in third storage locations are supplied onto the select inputs of the multiplexer.
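Below is an illustrative sketch of how the same stored bits can be reused by the two matchers described above: the N stored values act as a TCAM mask and the O stored values as a TCAM match value for the first matcher, while together they form an N+O-bit mask for the second (wide) matcher. The exact wide-match function is not specified in the abstract; the one below is only a placeholder to make the data paths concrete, and the widths are assumptions.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define N 8   /* assumed number of mask bits  */
#define O 8   /* assumed number of match bits */

/* TCAM-style match: only the bits selected by the mask must equal the match value. */
static bool tcam_match(uint8_t byte, uint8_t mask_n, uint8_t match_o) {
    return ((byte ^ match_o) & mask_n) == 0;
}

/* Placeholder wide match over a 16-bit input using the combined N+O-bit mask. */
static bool wide_match(uint16_t wide_in, uint16_t mask_no) {
    return (wide_in & mask_no) != 0;   /* hypothetical semantics */
}

int main(void) {
    uint8_t mask_n  = 0xF0;            /* N data values from the first storage locations  */
    uint8_t match_o = 0xA0;            /* O data values from the second storage locations */
    uint16_t mask_no = (uint16_t)((mask_n << O) | match_o); /* same bits reused as one wide mask */

    uint8_t byte = 0xA7;               /* characterized input byte */
    bool r1 = tcam_match(byte, mask_n, match_o);
    bool r2 = wide_match((uint16_t)((byte << 8) | byte), mask_no);

    /* P data values (not shown) would drive a multiplexer that selects r1 or r2. */
    printf("tcam=%d wide=%d\n", r1, r2);
    return 0;
}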
Abstract:
Within a networking device, packet portions from multiple PDRSDs (Packet Data Receiving and Splitting Devices) are loaded into a single memory, so that the packet portions can later be processed by a processing device. Rather than the PDRSDs managing and handling the storing of packet portions into the memory, a packet engine is provided. A device interacting with the packet engine can use a PPI (Packet Portion Identifier) Addressing Mode (PAM) in communicating with the packet engine and in instructing the packet engine to store packet portions. Alternatively, the device can use a Linear Addressing Mode (LAM) to communicate with the packet engine. A PAM/LAM selection code field in a bus transaction value sent to the packet engine indicates whether PAM or LAM will be used.
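A sketch of the addressing-mode selection described above follows: a bus transaction value carries a field that tells the packet engine whether the address portion is a PPI (PAM) or a linear memory address (LAM). The field names, widths, and struct layout are illustrative, not the actual bus format.

#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t  pam_lam;   /* PAM/LAM selection code: 1 = PPI addressing, 0 = linear addressing */
    uint32_t addr;      /* interpreted as a PPI (PAM) or as a linear address (LAM) */
    uint32_t length;    /* number of bytes of the packet portion to store */
} bus_transaction_t;

static void packet_engine_write(const bus_transaction_t *t) {
    if (t->pam_lam) {
        /* PAM: the packet engine resolves the PPI to a memory location itself. */
        printf("PAM write: PPI %u, %u bytes\n", (unsigned)t->addr, (unsigned)t->length);
    } else {
        /* LAM: the caller supplies the memory address directly. */
        printf("LAM write: address 0x%08x, %u bytes\n", (unsigned)t->addr, (unsigned)t->length);
    }
}

int main(void) {
    bus_transaction_t via_ppi  = { .pam_lam = 1, .addr = 42,         .length = 256 };
    bus_transaction_t via_addr = { .pam_lam = 0, .addr = 0x00100000, .length = 256 };
    packet_engine_write(&via_ppi);
    packet_engine_write(&via_addr);
    return 0;
}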
Abstract:
A transactional memory (TM) includes a control circuit pipeline and an associated memory unit. The memory unit stores a plurality of rings. The pipeline maintains, for each ring, a head pointer and a tail pointer. A ring operation stage of the pipeline maintains the pointers as values are put onto and are taken off the rings. A put command causes the TM to put a value into a ring, provided the ring is not full. A get command causes the TM to take a value off a ring, provided the ring is not empty. A put with low priority command causes the TM to put a value into a ring, provided the ring has at least a predetermined amount of free buffer space. A get from a set of rings command causes the TM to get a value from the highest priority non-empty ring (of a specified set of rings).
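The following is a software model of the four ring commands described above, assuming each ring is a circular buffer described by head and tail pointers. The ring size and the "predetermined amount of free buffer space" threshold are illustrative, and rings[0] is assumed to be the highest-priority ring in a set.

#include <stdbool.h>
#include <stdint.h>

#define RING_SIZE      16
#define LOW_PRIO_SPACE 4   /* assumed "predetermined amount of free buffer space" */

typedef struct {
    uint32_t buf[RING_SIZE];
    unsigned head, tail, count;
} ring_t;

static bool ring_put(ring_t *r, uint32_t v) {           /* put: fails if the ring is full */
    if (r->count == RING_SIZE) return false;
    r->buf[r->tail] = v;
    r->tail = (r->tail + 1) % RING_SIZE;
    r->count++;
    return true;
}

static bool ring_get(ring_t *r, uint32_t *v) {          /* get: fails if the ring is empty */
    if (r->count == 0) return false;
    *v = r->buf[r->head];
    r->head = (r->head + 1) % RING_SIZE;
    r->count--;
    return true;
}

/* put with low priority: only succeeds if enough free buffer space remains. */
static bool ring_put_low_prio(ring_t *r, uint32_t v) {
    if (RING_SIZE - r->count < LOW_PRIO_SPACE) return false;
    return ring_put(r, v);
}

/* get from a set of rings: take from the highest-priority non-empty ring. */
static bool ring_get_from_set(ring_t *rings[], unsigned n, uint32_t *v) {
    for (unsigned i = 0; i < n; i++)
        if (ring_get(rings[i], v)) return true;
    return false;
}

int main(void) {
    ring_t hi = {0}, lo = {0};
    ring_t *set[2] = { &hi, &lo };
    uint32_t v;
    ring_put_low_prio(&lo, 7);
    return (ring_get_from_set(set, 2, &v) && v == 7) ? 0 : 1;
}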
Abstract:
A transactional memory (TM) receives an Atomic Metering Command (AMC) across a bus from a processor. The command includes a memory address and a meter pair indicator value. In response to the AMC, the TM pulls an input value (IV). The TM uses the memory address to read a word including multiple credit values from a memory unit. Circuitry within the TM selects a pair of credit values, subtracts the IV from each of the pair of credit values thereby generating a pair of decremented credit values, compares each of the decremented credit values with a threshold value thereby generating a pair of indicator values, performs a lookup based upon the pair of indicator values and the meter pair indicator value, and outputs a selector value and a result value that represents a meter color. The selector value determines which credit values are written back to the memory unit.
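The sketch below models the atomic metering data flow described above in software. The two credit values, the threshold, and the lookup contents are illustrative; the abstract specifies the sequence of operations (subtract, compare, look up, write back), not the exact table, and the meter pair indicator value's role in the lookup is folded into the simplified color choice here.

#include <stdint.h>
#include <stdio.h>

typedef enum { GREEN, YELLOW, RED } color_t;

typedef struct {
    int32_t credit[2];   /* pair of credit values selected from the word read from memory */
} meter_pair_t;

static color_t atomic_meter(meter_pair_t *m, int32_t iv, int32_t threshold) {
    int32_t dec0 = m->credit[0] - iv;            /* pair of decremented credit values */
    int32_t dec1 = m->credit[1] - iv;
    int ind0 = dec0 >= threshold;                /* pair of indicator values */
    int ind1 = dec1 >= threshold;

    /* Illustrative lookup: the indicator pair selects a meter color and which
     * decremented values are written back (the "selector value"). */
    color_t color = ind0 ? GREEN : (ind1 ? YELLOW : RED);
    if (ind0) m->credit[0] = dec0;               /* selector: write back credit 0 */
    if (ind1) m->credit[1] = dec1;               /* selector: write back credit 1 */
    return color;
}

int main(void) {
    meter_pair_t m = { { 1500, 300 } };
    color_t c = atomic_meter(&m, 1000, 0);
    printf("color=%d credits=%d,%d\n", (int)c, (int)m.credit[0], (int)m.credit[1]);
    return 0;
}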
Abstract:
In response to receiving a novel “Return Available PPI Credits” command from a credit-aware device, a packet engine sends a “Credit To Be Returned” (CTBR) value it maintains for that device back to the credit-aware device, and zeroes out its stored CTBR value. The credit-aware device adds the returned credits to a “Credits Available” value it maintains. The credit-aware device uses the “Credits Available” value to determine whether it can issue a PPI allocation request. The “Return Available PPI Credits” command does not result in any PPI allocation or de-allocation. In another novel aspect, the credit-aware device is permitted to issue one PPI allocation request to the packet engine when its recorded “Credits Available” value is zero or negative. If the PPI allocation request cannot be granted, then it is buffered in the packet engine and resubmitted internally until the packet engine makes the PPI allocation.
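Here is a sketch of the credit exchange described above: the packet engine accumulates a per-device "Credits To Be Returned" (CTBR) value, returns and zeroes it in response to a Return Available PPI Credits command, and the credit-aware device folds the returned credits into its Credits Available count; one allocation request is allowed even when Credits Available is zero or negative. The structs and helper names are illustrative.

#include <stdbool.h>
#include <stdio.h>

typedef struct { int ctbr; } packet_engine_t;        /* per-device CTBR value */
typedef struct { int credits_available; } ca_device_t;

/* "Return Available PPI Credits": return the CTBR value and zero it out. */
static int return_available_ppi_credits(packet_engine_t *pe) {
    int returned = pe->ctbr;
    pe->ctbr = 0;
    return returned;
}

/* Normally a request requires positive credits; one request is permitted even
 * when Credits Available is zero or negative. */
static bool may_issue_alloc_request(const ca_device_t *d, bool zero_credit_request_used) {
    return d->credits_available > 0 || !zero_credit_request_used;
}

int main(void) {
    packet_engine_t pe = { .ctbr = 3 };
    ca_device_t dev = { .credits_available = 0 };

    printf("alloc allowed at 0 credits: %d\n", may_issue_alloc_request(&dev, false));
    dev.credits_available += return_available_ppi_credits(&pe);   /* credits come back */
    printf("credits available: %d, CTBR now %d\n", dev.credits_available, pe.ctbr);
    return 0;
}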
Abstract:
A novel allocate instruction and a novel API call are received onto a compiler. The allocate instruction includes a symbol that identifies a non-memory resource instance. The API call is a call to perform an operation on a non-memory resource instance, where the particular instance is indicated by the symbol in the API call. The compiler replaces the API call with a set of API instructions. A linker then allocates a value to be associated with the symbol, where the allocated value is one of a plurality of values, and where each value corresponds to a respective one of the non-memory resource instances. After allocation, the linker generates an amount of executable code, where the API instructions in the code: 1) are for using the allocated value to generate an address of a register in the appropriate non-memory resource instance, and 2) are for accessing the register.
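The following is a conceptual sketch, written in C rather than the actual toolchain's input language, of the effect the abstract describes: a symbol names a non-memory resource instance, the linker later binds the symbol to one of several instance values, and the emitted API instructions use that value to form a register address and access the register. The base address, stride, register offset, and names are all hypothetical.

#include <stdint.h>
#include <stdio.h>

#define RESOURCE_BASE   0x1000u   /* hypothetical base of the resource register space */
#define RESOURCE_STRIDE 0x0100u   /* hypothetical spacing between resource instances  */

/* Stand-in for the value the linker allocates to the symbol: one of a small
 * set of values, each corresponding to one non-memory resource instance. */
static const uint32_t MY_RESOURCE = 2;

/* Stand-in for the API instructions the compiler emits in place of the API
 * call: use the allocated value to generate the register address. */
static uint32_t resource_register_addr(uint32_t instance, uint32_t reg_offset) {
    return RESOURCE_BASE + instance * RESOURCE_STRIDE + reg_offset;
}

int main(void) {
    uint32_t addr = resource_register_addr(MY_RESOURCE, 0x04);
    printf("access register at 0x%04x\n", (unsigned)addr);   /* a real target would read/write it */
    return 0;
}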
Abstract:
A pipelined run-to-completion processor has a special tripwire bus port and executes a novel tripwire instruction. Execution of the tripwire instruction causes the processor to output a tripwire value onto the port during a clock cycle when the tripwire instruction is being executed. A first multi-bit value of the tripwire value is data output from registers, flags, pointers, and/or data values stored in the pipeline. A field of the tripwire instruction specifies which particular stored values will be output as the first multi-bit value. A second multi-bit value of the tripwire value is a number that identifies the particular processor that output the tripwire value. The processor has a TE enable/disable control bit. This bit is programmable by a special instruction to disable all tripwire instructions. If disabled, a tripwire instruction is fetched and decoded but does not cause the output of a tripwire value.
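Below is a sketch of how the tripwire value described above might be composed: a first multi-bit field carrying selected internal state and a second multi-bit field identifying the processor, gated by the enable/disable bit. The field widths, layout, and processor id are illustrative assumptions.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PROCESSOR_ID 5u        /* number identifying this particular processor */

static bool te_enable = true;  /* TE enable/disable control bit */

/* Models one tripwire instruction: if enabled, drive the tripwire bus with
 * {processor id, selected state}; if disabled, decode but output nothing. */
static bool tripwire(uint32_t selected_state, uint32_t *bus_out) {
    if (!te_enable)
        return false;                              /* fetched and decoded, no output */
    *bus_out = (PROCESSOR_ID << 24) | (selected_state & 0x00FFFFFF);
    return true;
}

int main(void) {
    uint32_t bus;
    if (tripwire(0x00ABCD, &bus))
        printf("tripwire value 0x%08x\n", (unsigned)bus);
    te_enable = false;                             /* special instruction disables tripwires */
    printf("after disable: %d\n", tripwire(0x00ABCD, &bus));
    return 0;
}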
Abstract:
A bit stream having non-deterministic entropy is generated by a Self-Timed Logic Entropy Bit Stream Generator (STLEBSG). The STLEBSG includes an incrementer and a linear feedback shift register (LFSR), both implemented in self-timed logic as parts of an asynchronous state machine. In response to a command, the incrementer asynchronously increments a number of times and then stops, where the number of times is determined by the command. For each increment of the incrementer, the LFSR undergoes a state transition. As the incrementer increments, the LFSR outputs the bit stream. If the command is a run repeatedly command, then after the incrementer stops, it is reinitialized and then again increments the number of times. This incrementing, stopping, reinitializing, and incrementing process is repeated indefinitely. Another command causes the incrementer to be loaded. Another command causes the LFSR to be loaded.
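The following is a clocked software model of the behavior described above; the real circuit is self-timed and asynchronous, and its entropy comes from that timing, not from the LFSR sequence itself. An incrementer counts a commanded number of times, the LFSR steps once per increment and emits one bit per step, and a run repeatedly command restarts the count. The 16-bit LFSR taps, the seed, and the bounded repeat loop are illustrative.

#include <stdint.h>
#include <stdio.h>

static uint16_t lfsr = 0xACE1;    /* loadable LFSR state */
static uint32_t incrementer = 0;  /* loadable incrementer */

static int lfsr_step(void) {      /* Fibonacci LFSR, taps 16, 14, 13, 11 */
    int out = lfsr & 1;
    uint16_t fb = ((lfsr >> 0) ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1u;
    lfsr = (uint16_t)((lfsr >> 1) | (fb << 15));
    return out;
}

/* Run once: increment 'count' times, emitting one LFSR output bit per increment. */
static void run_once(uint32_t count) {
    for (incrementer = 0; incrementer < count; incrementer++)
        putchar(lfsr_step() ? '1' : '0');
}

int main(void) {
    run_once(32);                         /* single run command */
    putchar('\n');
    for (int rep = 0; rep < 3; rep++) {   /* bounded stand-in for "run repeatedly" */
        run_once(16);                     /* reinitialize and increment again */
        putchar('\n');
    }
    return 0;
}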
Abstract:
A general purpose PicoEngine Multi-Processor (PEMP) includes a hierarchically organized pool of small specialized picoengine processors and associated memories. A stream of data input values is received onto the PEMP. Each input data value is characterized, and from the characterization a task is determined. Picoengines are selected in a sequence. When the next picoengine in the sequence is available, it is given the input data value along with an associated task assignment. The picoengine then performs the task. An output picoengine selector selects picoengines in the same sequence. If the next picoengine indicates that it has completed its assigned task, then the output value from the selected picoengine is output from the PEMP. By changing the sequence used, more or less of the pool's processing power and memory resources are brought to bear on the incoming data stream. The PEMP automatically disables unused picoengines and memories.
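The sketch below is a simplified software model of the in-order dispatch and collect scheme described above: an input selector hands each data value and task to the next picoengine in a fixed sequence, and an output selector walks the same sequence, emitting a result only when that picoengine has finished. The pool size, the task computation, and the synchronous completion are all illustrative stand-ins for the hardware.

#include <stdbool.h>
#include <stdio.h>

#define POOL_SIZE 4

typedef struct {
    bool busy, done;
    int  data, task, result;
} picoengine_t;

static picoengine_t pool[POOL_SIZE];
static int in_sel = 0, out_sel = 0;   /* both selectors walk the same sequence */

static bool dispatch(int data, int task) {
    picoengine_t *pe = &pool[in_sel];
    if (pe->busy) return false;                 /* wait until the next engine is available */
    *pe = (picoengine_t){ .busy = true, .data = data, .task = task };
    pe->result = data + task;                   /* stand-in for performing the assigned task */
    pe->done = true;
    in_sel = (in_sel + 1) % POOL_SIZE;
    return true;
}

static bool collect(int *result) {
    picoengine_t *pe = &pool[out_sel];
    if (!pe->done) return false;                /* output stays in sequence order */
    *result = pe->result;
    pe->busy = pe->done = false;
    out_sel = (out_sel + 1) % POOL_SIZE;
    return true;
}

int main(void) {
    for (int i = 0; i < 6; i++) {
        dispatch(i, 100);
        int r;
        if (collect(&r)) printf("out %d\n", r);
    }
    return 0;
}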