Abstract:
A device has a physical network interface port through which a user can monitor and configure the device. A backend process and a virtual machine (VM) execute on a host operating system (OS). A front end user interface process executes on the VM, and is therefore compartmentalized in the VM. There is no front end user interface executing on the host OS outside the VM. The only management access channel into the device is via a first communication path through the physical network interface port, to the VM, up the VM's stack, and to the front end process. If the backend process is to be instructed to take an action, then the front end process forwards an application layer instruction to the backend process via a second communication path. The instruction passes down the VM stack, across a virtual secure network link, up the host stack, and to the backend process.
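The control path described above is a software architecture rather than a specific API; as a loose illustration, a front end process inside the VM might forward an application-layer instruction to the host-side backend over an ordinary TCP connection on the virtual secure network link. The address, port, and function name below are assumptions, not details from the abstract.

    /* Minimal sketch (not the patented implementation): the front end inside
     * the VM forwards an application-layer instruction to the backend process
     * on the host over the virtual secure network link.
     * The 169.254.0.1 address and port 7000 are illustrative assumptions. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int forward_to_backend(const char *instruction) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) return -1;
        struct sockaddr_in backend = {0};
        backend.sin_family = AF_INET;
        backend.sin_port = htons(7000);                      /* assumed backend port */
        inet_pton(AF_INET, "169.254.0.1", &backend.sin_addr); /* host side of virtual link */
        if (connect(fd, (struct sockaddr *)&backend, sizeof backend) < 0) {
            close(fd);
            return -1;
        }
        ssize_t n = send(fd, instruction, strlen(instruction), 0); /* application layer command */
        close(fd);
        return n < 0 ? -1 : 0;
    }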
Abstract:
A hardware trie structure includes a tree of internal node circuits and leaf node circuits. Each internal node is configured by a corresponding multi-bit node control value (NCV). Each leaf node can output a corresponding result value (RV). An input value (IV) supplied onto input leads of the trie causes signals to propagate through the trie such that one of the leaf nodes outputs one of the RVs onto output leads of the trie. In a transactional memory, a memory stores a set of NCVs and RVs. In response to a lookup command, the NCVs and RVs are read out of memory and are used to configure the trie. The IV of the lookup is supplied to the input leads, and the trie looks up an RV. A non-final RV initiates another lookup in a recursive fashion, whereas a final RV is returned as the result of the lookup command.
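The trie is a hardware circuit, but its lookup behavior can be modeled in software. The sketch below assumes a small binary trie in which each node control value selects which bit of the input value to test, and a top-bit marker distinguishes non-final result values that trigger a further, recursive lookup; both conventions are assumptions for illustration.

    /* Illustrative software model (not the hardware circuit): each internal
     * node's node control value (NCV) selects which bit of the input value
     * (IV) to test, and each leaf holds a result value (RV). */
    #include <stdint.h>

    #define DEPTH 3                      /* 7 internal nodes, 8 leaves */

    struct trie_cfg {
        uint8_t  ncv[(1 << DEPTH) - 1];  /* bit index tested at each internal node */
        uint32_t rv[1 << DEPTH];         /* result value at each leaf */
    };

    #define RV_NONFINAL 0x80000000u      /* assumed marker for a non-final RV */

    uint32_t trie_lookup(const struct trie_cfg *cfg, uint32_t iv) {
        unsigned node = 0;
        for (int level = 0; level < DEPTH; level++) {
            unsigned bit = (iv >> cfg->ncv[node]) & 1u;  /* branch left or right */
            node = 2 * node + 1 + bit;
        }
        return cfg->rv[node - ((1u << DEPTH) - 1)];      /* leaf outputs its RV */
    }

    /* Recursive use: a non-final RV names the next configuration to load. */
    uint32_t lookup(const struct trie_cfg *tries, uint32_t iv) {
        uint32_t rv = trie_lookup(&tries[0], iv);
        while (rv & RV_NONFINAL)
            rv = trie_lookup(&tries[rv & ~RV_NONFINAL], iv);
        return rv;
    }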
Abstract:
A microcontroller system may include a microcontroller having a processor and a first memory, a memory bus, and a second memory in communication with the microcontroller via the memory bus. The first memory may include instructions for accessing a first data set from a contiguous memory block in the second memory. The first data set may include a first word having a first value and a plurality of first other words. The first memory may include instructions for receiving a write instruction including a second data set to be written to the contiguous memory block, the second data set including a second word having a second value. The first memory may include instructions for determining whether the first value equals the second value. If so, the first memory may include instructions for writing the second data set to the contiguous memory block and updating the first value.
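The check-before-write behavior resembles a versioned or compare-and-swap style update. A minimal sketch, assuming the compared values sit in the first word of each data set and that "updating the first value" means incrementing it:

    /* Minimal sketch of the described check-before-write; names, block size,
     * and the increment used to update the first value are assumptions. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define BLOCK_WORDS 8

    /* first_block points at the contiguous block in the second memory.
     * second_set is the data set carried by the write instruction. */
    bool write_if_value_matches(uint32_t *first_block, const uint32_t *second_set) {
        uint32_t first_value  = first_block[0];  /* first word of the stored set */
        uint32_t second_value = second_set[0];   /* first word of the incoming set */
        if (first_value != second_value)
            return false;                        /* values differ: reject the write */
        memcpy(first_block, second_set, BLOCK_WORDS * sizeof *first_block);
        first_block[0] = first_value + 1;        /* update the first value */
        return true;
    }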
Abstract:
A transactional memory (TM) receives an Atomic Metering Command (AMC) across a bus from a processor. The command includes a memory address and a meter pair indicator value. In response to the AMC, the TM pulls an input value (IV). The TM uses the memory address to read a word including multiple credit values from a memory unit. Circuitry within the TM selects a pair of credit values, subtracts the IV from each of the pair of credit values thereby generating a pair of decremented credit values, compares the pair of decremented credit values with a threshold value, respectively, thereby generating a pair of indicator values, performs a lookup based upon the pair of indicator values and the meter pair indicator value, and outputs a selector value and a result value that represents a meter color. The selector value determines the credit values written back to the memory unit.
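The metering operation can be modeled in software. The sketch below assumes four credit values per word, a threshold of zero, and a small lookup table that maps the indicator pair to a color and a write-back selector; these particulars resemble a two-rate meter and are assumptions, not the circuit's actual encoding.

    /* Illustrative software model of the Atomic Metering Command. */
    #include <stdint.h>

    enum meter_color { GREEN, YELLOW, RED };

    struct meter_word { int32_t credit[4]; };      /* word read from the memory unit */

    enum meter_color atomic_meter(struct meter_word *w,
                                  int pair,        /* meter pair indicator value */
                                  int32_t iv) {    /* pulled input value */
        int32_t c[2] = { w->credit[2 * pair], w->credit[2 * pair + 1] };
        int32_t d[2] = { c[0] - iv, c[1] - iv };   /* decremented credit values */
        int ind0 = d[0] >= 0, ind1 = d[1] >= 0;    /* pair of indicator values */

        /* Lookup keyed by the indicator pair: a result value (color) and a
         * selector value saying which decremented credits to write back. */
        static const struct { enum meter_color color; int sel0, sel1; } lut[2][2] = {
            [0][0] = { RED,    0, 0 },   /* keep both original credits */
            [0][1] = { YELLOW, 0, 1 },   /* commit only the second     */
            [1][0] = { YELLOW, 1, 0 },   /* commit only the first      */
            [1][1] = { GREEN,  1, 1 },   /* commit both decremented    */
        };
        w->credit[2 * pair]     = lut[ind0][ind1].sel0 ? d[0] : c[0];
        w->credit[2 * pair + 1] = lut[ind0][ind1].sel1 ? d[1] : c[1];
        return lut[ind0][ind1].color;
    }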
Abstract:
Software update information is communicated to a network appliance either across a network or from a local memory device. The software update information includes kernel data, application data, or indicator data. The network appliance includes a first storage device, a second storage device, an operating memory, a central processing unit (CPU), and a network adapter. First and second storage devices are persistent storage devices. In a first example, both kernel data and application data are updated in the network appliance in response to receiving the software update information. In a second example, only the kernel data is updated in the network appliance in response to receiving the software update information. In a third example, only the application data is updated in the network appliance in response to receiving the software update information. Indicator data included in the software update information determines the data to be updated in the network appliance.
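A rough sketch of how indicator data might steer the update, with assumed flag values and placeholder storage-write helpers:

    /* Minimal sketch of dispatching on the indicator data; the flag values
     * and helper names below are illustrative assumptions. */
    #include <stdbool.h>
    #include <stdint.h>

    #define UPDATE_KERNEL      0x1   /* assumed indicator bits */
    #define UPDATE_APPLICATION 0x2

    struct update_info {
        uint32_t    indicator;       /* indicator data                */
        const void *kernel_data;     /* new kernel image, if present  */
        const void *app_data;        /* new application data, if any  */
    };

    /* Placeholder stubs standing in for writes to the persistent storage devices. */
    static bool write_kernel_to_storage(const void *kernel_data) { (void)kernel_data; return true; }
    static bool write_app_to_storage(const void *app_data)       { (void)app_data;    return true; }

    bool apply_update(const struct update_info *u) {
        bool ok = true;
        if (u->indicator & UPDATE_KERNEL)       /* first/second example: kernel data */
            ok = ok && write_kernel_to_storage(u->kernel_data);
        if (u->indicator & UPDATE_APPLICATION)  /* first/third example: application data */
            ok = ok && write_app_to_storage(u->app_data);
        return ok;
    }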
Abstract:
A network appliance includes a network processor and several processing units. Packets of a flow pair are received onto the network appliance. Without performing deep packet inspection on any packet of the flow pair, the network processor analyzes the flows, estimates therefrom the application protocol used, and determines a predicted future time when the next packet will likely be received. The network processor determines to send the next packet to a selected one of the processing units based in part on the predicted future time. In some cases, the network processor causes a cache of the selected processing unit to be preloaded shortly before the predicted future time. When the next packet is actually received, the packet is directed to the selected processing unit. In this way, packets are directed to processing units within the network appliance based on predicted future packet arrival times without the use of deep packet inspection.
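As a loose illustration of the dispatch decision, the sketch below assumes each processing unit advertises when it expects to be free, picks the unit most likely to be idle at the predicted arrival time, and schedules a cache preload a fixed lead time beforehand; the selection rule and lead time are assumptions.

    /* Minimal sketch of dispatch based on a predicted future arrival time. */
    #include <stdint.h>

    #define NUM_UNITS 4

    struct processing_unit {
        uint64_t busy_until_us;      /* time at which this unit expects to be free */
    };

    /* Pick the unit for the flow's next packet given its predicted arrival time. */
    int select_unit(const struct processing_unit units[NUM_UNITS],
                    uint64_t predicted_arrival_us) {
        int best = 0;
        for (int i = 0; i < NUM_UNITS; i++) {
            if (units[i].busy_until_us <= predicted_arrival_us)
                return i;            /* this unit should be idle when the packet lands */
            if (units[i].busy_until_us < units[best].busy_until_us)
                best = i;
        }
        return best;                 /* otherwise, the unit that frees up soonest */
    }

    /* Schedule a cache preload shortly before the packet is expected. */
    uint64_t preload_deadline_us(uint64_t predicted_arrival_us) {
        const uint64_t lead_us = 10; /* assumed preload lead time */
        return predicted_arrival_us > lead_us ? predicted_arrival_us - lead_us : 0;
    }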
Abstract:
An island-based integrated circuit includes a configurable mesh data bus. The data bus includes four meshes. Each mesh includes, for each island, a crossbar switch and radiating half links. The half links of adjacent islands align to form links between crossbar switches. A link is implemented as two distributed credit FIFOs. In one direction, a link portion involves a first FIFO associated with an output port of a first island, a first chain of registers, and a second FIFO associated with an input port of a second island. When a transaction value passes through the FIFO and through the crossbar switch of the second island, an arbiter in the crossbar switch returns a taken signal. The taken signal passes back through a second chain of registers to a credit count circuit in the first island. The credit count circuit maintains a credit count value for the distributed credit FIFO.
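The credit bookkeeping at the sending end of a link can be modeled simply: one credit per FIFO slot, consumed when a transaction value is launched and restored when the taken signal returns. The FIFO depth below is an assumption.

    /* Illustrative model of the credit count kept at the sending end of a
     * distributed credit FIFO. */
    #include <stdbool.h>

    #define LINK_CREDITS 8           /* assumed depth of the distributed FIFO */

    struct credit_counter {
        int credits;                 /* free slots currently believed available */
    };

    void credit_init(struct credit_counter *c) { c->credits = LINK_CREDITS; }

    /* Called when the output port wants to push a transaction value. */
    bool credit_try_send(struct credit_counter *c) {
        if (c->credits == 0)
            return false;            /* link full: hold the transaction value */
        c->credits--;                /* one slot now occupied somewhere in the link */
        return true;
    }

    /* Called when the taken signal arrives back through the register chain. */
    void credit_on_taken(struct credit_counter *c) {
        c->credits++;                /* far crossbar consumed a value: slot freed */
    }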
Abstract:
An appliance receives packets that are part of a flow pair, each packet sharing an application protocol. The appliance determines the application protocol of the packets by performing deep packet inspection (DPI) on the packets. Packet sizes are measured and converted into packet size states. Packet size states, packet sequence numbers, and packet flow directions are used to create an application protocol estimation table (APET). The APET is used during normal operation to estimate the application protocol of a flow pair without performing time-consuming DPI. The appliance then determines inter-packet intervals between received packets. The inter-packet intervals are converted into inter-packet interval states. The inter-packet interval states and packet sequence numbers are used to create an inter-packet interval prediction table. The appliance then stores an inter-packet interval prediction table for each application protocol. The inter-packet interval prediction table is used during operation to predict the inter-packet interval between packets.
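A minimal sketch of an inter-packet interval prediction table, assuming intervals are quantized into a few states and the table is keyed by application protocol and packet sequence number; the state boundaries and table sizes are illustrative, not taken from the abstract.

    /* Sketch of building and consulting an inter-packet interval prediction table. */
    #include <stdint.h>

    #define NUM_PROTOCOLS 16
    #define MAX_SEQ       32
    #define NUM_STATES    4

    /* Quantize a measured inter-packet interval into an interval state. */
    static int interval_state(uint64_t interval_us) {
        if (interval_us < 100)   return 0;
        if (interval_us < 1000)  return 1;
        if (interval_us < 10000) return 2;
        return 3;
    }

    /* One prediction table per application protocol. */
    static uint8_t prediction_table[NUM_PROTOCOLS][MAX_SEQ];

    /* Training: record the interval state observed at this point in the flow. */
    void record_interval(int protocol, int seq, uint64_t interval_us) {
        prediction_table[protocol][seq % MAX_SEQ] = (uint8_t)interval_state(interval_us);
    }

    /* Operation: predict the interval state before the next packet arrives. */
    int predict_interval_state(int protocol, int seq) {
        return prediction_table[protocol][seq % MAX_SEQ];
    }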
Abstract:
A transactional memory (TM) includes a control circuit pipeline and an associated memory unit. The memory unit stores a plurality of rings. The pipeline maintains, for each ring, a head pointer and a tail pointer. A ring operation stage of the pipeline maintains the pointers as values are put onto and are taken off the rings. A put command causes the TM to put a value into a ring, provided the ring is not full. A get command causes the TM to take a value off a ring, provided the ring is not empty. A put with low priority command causes the TM to put a value into a ring, provided the ring has at least a predetermined amount of free buffer space. A get from a set of rings command causes the TM to get a value from the highest priority non-empty ring (of a specified set of rings).
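The four ring commands map naturally onto a software ring buffer. The sketch below assumes a fixed ring size, a caller-supplied free-space requirement for the low-priority put, and a lower-index-is-higher-priority convention for the get-from-set command; these are assumptions for illustration, not the pipeline's actual behavior.

    /* Illustrative software model of the put, get, put-with-low-priority,
     * and get-from-set-of-rings commands. */
    #include <stdbool.h>
    #include <stdint.h>

    #define RING_SIZE 16

    struct ring {
        uint32_t buf[RING_SIZE];
        unsigned head, tail;         /* maintained by the ring operation stage */
        unsigned count;
    };

    bool ring_put(struct ring *r, uint32_t v) {
        if (r->count == RING_SIZE) return false;     /* ring full: put fails */
        r->buf[r->tail] = v;
        r->tail = (r->tail + 1) % RING_SIZE;
        r->count++;
        return true;
    }

    bool ring_get(struct ring *r, uint32_t *v) {
        if (r->count == 0) return false;             /* ring empty: get fails */
        *v = r->buf[r->head];
        r->head = (r->head + 1) % RING_SIZE;
        r->count--;
        return true;
    }

    /* Put with low priority: only succeeds if enough free buffer space remains. */
    bool ring_put_low_priority(struct ring *r, uint32_t v, unsigned min_free) {
        if (RING_SIZE - r->count < min_free) return false;
        return ring_put(r, v);
    }

    /* Get from a set of rings: take from the highest priority non-empty ring. */
    bool ring_get_from_set(struct ring *set[], unsigned n, uint32_t *v) {
        for (unsigned i = 0; i < n; i++)             /* index 0 = highest priority */
            if (ring_get(set[i], v)) return true;
        return false;
    }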
Abstract:
An island-based network flow processor (IB-NFP) integrated circuit includes islands organized in rows. A configurable mesh event bus extends through the islands and is configured to form a local event ring. The configurable mesh event bus is configured with configuration information received via a configurable mesh control bus. The local event ring involves event ring circuits and event ring segments. In one example, a packet is received onto a first island. If an amount of a processing resource (for example, memory buffer space) available to the first island is below a threshold, then an event packet is communicated from the first island to a second island via the local event ring. In response, the second island causes a third island to communicate via a command/push/pull data bus with the first island, thereby increasing the amount of the processing resource available to the first island for handling incoming packets.
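As a loose illustration of the first island's side of this mechanism, the sketch below emits an event packet when free buffer space falls below a threshold; the event packet fields, type code, and send hook are assumptions.

    /* Minimal sketch of reporting resource pressure onto the local event ring. */
    #include <stdbool.h>
    #include <stdint.h>

    struct event_packet {
        uint8_t  source_island;      /* island reporting the shortage      */
        uint8_t  event_type;         /* e.g. "buffer space low"            */
        uint16_t free_buffers;       /* resource level at the time of send */
    };

    /* Placeholder standing in for injection onto the local event ring segment. */
    static void event_ring_send(const struct event_packet *ep) { (void)ep; }

    bool check_buffer_pressure(uint8_t island_id, uint16_t free_buffers,
                               uint16_t threshold) {
        if (free_buffers >= threshold)
            return false;            /* enough resource: nothing to report */
        struct event_packet ep = { island_id, 0x1 /* assumed type code */, free_buffers };
        event_ring_send(&ep);        /* a second island reacts and enlists a third */
        return true;
    }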