Abstract:
An Island-Based Network Flow Processor (IBNFP) includes a memory and a processor located on a first island, a Direct Memory Access (DMA) controller located on a second island, and an Interlaken Look-Aside (ILA) interface circuit and an interface circuit located on a third island. A search key data set including multiple search keys is stored in the memory. A descriptor is generated by the processor and is sent to the DMA controller, which generates a search key data request, receives the search key data set, and selects a single search key. The ILA interface circuit receives the search key and generates an ILA packet including the search key, which is sent to an external transactional memory device that generates a result data value. The DMA controller receives the result data value via the ILA interface circuit, writes the result data value to the memory, and sends a DMA completion notification.
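As a rough sketch (not part of the abstract), the DMA controller's role could be modeled as follows; the structure layouts, key widths, and channel number are invented for illustration, and in the device these steps are carried out by hardware rather than C code:

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical layout of a search key data set held in the island
     * memory; the key count and 128-bit key width are illustrative. */
    struct search_key_set {
        uint32_t num_keys;
        uint8_t  keys[8][16];
    };

    /* Hypothetical ILA packet carrying one selected search key to the
     * external transactional memory device. */
    struct ila_packet {
        uint16_t channel;
        uint8_t  payload[16];
    };

    /* DMA controller side: after receiving the search key data set,
     * select a single search key and build the ILA packet for it. */
    static void build_ila_lookup(const struct search_key_set *set,
                                 uint32_t key_index,
                                 struct ila_packet *pkt)
    {
        pkt->channel = 0;                              /* illustrative */
        memcpy(pkt->payload, set->keys[key_index], sizeof(pkt->payload));
    }

The result data value returned in the corresponding ILA response would then be written back to the island memory before the DMA completion notification is sent.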
Abstract:
Circuitry to provide in-order packet delivery. A packet descriptor including a sequence number is received. It is determined in which of three ranges the sequence number resides. Depending, at least in part, on the range in which the sequence number resides, it is determined whether the packet descriptor is to be communicated to a scheduler, which causes an associated packet to be transmitted. If the sequence number resides in a first “flush” range, all associated packet descriptors are output. If the sequence number resides in a second “send” range, only the received packet descriptor is output. If the sequence number resides in a third “store and reorder” range and the sequence number is the next in-order sequence number, the packet descriptor is output; if the sequence number is not the next in-order sequence number, the packet descriptor is stored in a buffer and a corresponding valid bit is set.
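As a rough sketch (not part of the abstract), the three-range decision could be expressed as follows; the buffer size, range boundaries, and descriptor type are invented for illustration:

    #include <stdbool.h>
    #include <stdint.h>

    #define REORDER_SLOTS 64u                 /* illustrative buffer size */

    struct reorder_state {
        uint32_t next_seq;                    /* next in-order sequence number */
        bool     valid[REORDER_SLOTS];        /* valid bit per stored descriptor */
        uint64_t desc[REORDER_SLOTS];         /* stored packet descriptors */
    };

    enum range { RANGE_FLUSH, RANGE_SEND, RANGE_STORE_REORDER };

    /* Classify a sequence number relative to next_seq; the boundaries
     * used here are placeholders, not the device's configuration. */
    static enum range classify(const struct reorder_state *s, uint32_t seq)
    {
        uint32_t delta = seq - s->next_seq;   /* modular distance */
        if (delta < REORDER_SLOTS)       return RANGE_STORE_REORDER;
        if (delta < 2 * REORDER_SLOTS)   return RANGE_SEND;
        return RANGE_FLUSH;
    }

    /* Handle one descriptor; output() stands in for the path to the
     * scheduler that causes the associated packet to be transmitted. */
    static void handle(struct reorder_state *s, uint32_t seq, uint64_t d,
                       void (*output)(uint64_t))
    {
        switch (classify(s, seq)) {
        case RANGE_FLUSH:                     /* output all stored descriptors */
            for (uint32_t i = 0; i < REORDER_SLOTS; i++)
                if (s->valid[i]) { output(s->desc[i]); s->valid[i] = false; }
            output(d);
            s->next_seq = seq + 1;
            break;
        case RANGE_SEND:                      /* output only this descriptor */
            output(d);
            break;
        case RANGE_STORE_REORDER:
            if (seq == s->next_seq) {         /* next in order: send and drain */
                output(d);
                s->next_seq++;
                while (s->valid[s->next_seq % REORDER_SLOTS]) {
                    output(s->desc[s->next_seq % REORDER_SLOTS]);
                    s->valid[s->next_seq % REORDER_SLOTS] = false;
                    s->next_seq++;
                }
            } else {                          /* out of order: store, set valid bit */
                s->desc[seq % REORDER_SLOTS]  = d;
                s->valid[seq % REORDER_SLOTS] = true;
            }
            break;
        }
    }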
Abstract:
A transactional memory receives a command, where the command includes an address and a novel DAT (Do Address Translation) bit. If the DAT bit is set and if the transactional memory is enabled to do address translations and if the command is for an access (read or write) of a memory of the transactional memory, then the transactional memory performs an address translation operation on the address of the command. Parameters of the address translation are programmable and are set up before the command is received. In one configuration, certain bits of the incoming address are deleted, and other bits are shifted in bit position, and a base address is ORed in, and a padding bit is added, thereby generating the translated address. The resulting translated address is then used to access the memory of the transactional memory to carry out the command.
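As a rough sketch (not part of the abstract), the translation steps could be written out as follows; the field names and the particular mask, shift, base, and padding-bit values are invented, and in the device they are programmed into configuration registers before the command arrives:

    #include <stdint.h>

    /* Illustrative, programmable address translation parameters. */
    struct dat_config {
        int      enabled;        /* transactional memory enabled to translate */
        uint32_t delete_mask;    /* bits of the incoming address to delete */
        unsigned shift;          /* shift applied to the surviving bits */
        uint32_t base;           /* base address ORed into the result */
        unsigned pad_bit_pos;    /* position of the added padding bit */
    };

    static uint32_t translate(const struct dat_config *c,
                              uint32_t addr, int dat_bit)
    {
        if (!dat_bit || !c->enabled)
            return addr;                       /* no translation performed */

        uint32_t a = addr & ~c->delete_mask;   /* delete certain bits */
        a >>= c->shift;                        /* shift remaining bits in position */
        a |= c->base;                          /* OR in the base address */
        a |= 1u << c->pad_bit_pos;             /* add the padding bit */
        return a;                              /* used to access the memory */
    }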
Abstract:
A multi-processor includes a pool of processors and a common packet buffer memory. Bytes of packet data of a packet are stored in the packet buffer memory. Each processor has an intelligent packet data register file. One processor is tasked with processing the packet, and its packet data register file caches a subset of the bytes. If the register file detects a packet data prefetch trigger condition, and it does not store some of the bytes in a prefetch window, then it prefetches those bytes before they are required in the execution of a subsequent instruction. The processor has instructions that configure the prefetching, that enable such prefetching, and that disable such prefetching in certain ways.
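As a rough sketch (not part of the abstract), the configure/enable/disable behavior could be modeled as operations on per-processor prefetch state; the field names, window parameters, and function names are invented, and in the device these are processor instructions rather than C functions:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative prefetch state of one packet data register file. */
    struct prefetch_cfg {
        bool     enabled;
        uint16_t window_start;   /* packet byte offset where the window begins */
        uint16_t window_size;    /* number of bytes in the prefetch window */
    };

    /* Stand-ins for the instructions that configure, enable, and
     * disable the prefetching. */
    static void prefetch_configure(struct prefetch_cfg *c,
                                   uint16_t start, uint16_t size)
    {
        c->window_start = start;
        c->window_size  = size;
    }

    static void prefetch_enable(struct prefetch_cfg *c)  { c->enabled = true;  }
    static void prefetch_disable(struct prefetch_cfg *c) { c->enabled = false; }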
Abstract:
A pipelined run-to-completion processor can decode three instructions in three consecutive clock cycles, and can also execute the instructions in three consecutive clock cycles. The first instruction causes the ALU to generate a value, which execution of the first instruction then loads into a register of a register file. The second instruction accesses the register and loads the value into predicate bits in a register file read stage. The predicate bits are loaded in the very next clock cycle following the clock cycle in which the second instruction was decoded. The third instruction is a conditional instruction that uses the values of the predicate bits as a predicate code to determine a predicate function. If a predicate condition (as determined by the predicate function as applied to flags) is true, then an instruction operation of the third instruction is carried out; otherwise it is not carried out.
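As a rough sketch (not part of the abstract), the way the predicate bits select a predicate function applied to the flags could look as follows; the flag set and the 3-bit encoding are invented for illustration:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative ALU condition flags. */
    struct flags { bool zero, carry, negative; };

    /* The predicate bits act as a predicate code that selects a
     * predicate function; the function is applied to the flags to
     * decide whether the conditional instruction's operation is
     * carried out.  The encoding below is invented. */
    static bool predicate_condition(uint8_t pred_bits, struct flags f)
    {
        switch (pred_bits & 0x7) {
        case 0: return true;          /* always */
        case 1: return f.zero;        /* equal */
        case 2: return !f.zero;       /* not equal */
        case 3: return f.carry;       /* carry set */
        case 4: return f.negative;    /* negative */
        default: return false;        /* never */
        }
    }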
Abstract:
A remote processor interacts with a transactional memory that has a memory, local BWC (Byte-Wise Compare) resources, and local NFA (Non-deterministic Finite Automaton) engine resources. The processor causes a byte stream to be transferred into the transactional memory and into the memory. The processor then uses the BWC circuit to find a character signature in the byte stream. The processor obtains information about the character signature from the BWC circuit, and based on the information uses the NFA engine to process the byte stream starting at a byte position determined based at least in part on the results of the BWC circuit. From the time the byte stream is initially written into the transactional memory until the time the NFA engine completes, the byte stream is not read out of the transactional memory.
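As a rough sketch (not part of the abstract), the BWC step could be pictured as a byte-wise scan of the resident byte stream; the function below is a software stand-in for what the BWC circuit does inside the transactional memory:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Software stand-in for the BWC (Byte-Wise Compare) resources:
     * return the byte position of the character signature in the byte
     * stream, or len if the signature is not found. */
    static size_t bwc_find_signature(const uint8_t *stream, size_t len,
                                     const uint8_t *sig, size_t sig_len)
    {
        for (size_t i = 0; i + sig_len <= len; i++)
            if (memcmp(stream + i, sig, sig_len) == 0)
                return i;
        return len;
    }

The NFA engine would then be started at a byte position derived from this result, with the byte stream never being read back out of the transactional memory in between.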
Abstract:
A multi-processor includes a pool of processors and a common packet buffer memory. Bytes of packet data of a packet are stored in the packet buffer memory. Each processor has an intelligent packet data register file. One processor is tasked with processing the packet, and its packet data register file caches a subset of the bytes. Some instructions, when executed, require that the packet data register file supply the execute stage of the processor with certain bytes of the packet data. The register file detects a packet data prefetch trigger condition and, in response, determines whether it is missing some of the bytes in a prefetch window. If it is, then it retrieves those bytes from the packet buffer memory, so that it then has all the bytes in the prefetch window. In one example, a subsequently executed instruction uses the prefetched packet data.
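As a rough sketch (not part of the abstract), the trigger-and-fill behavior could look as follows; the buffer sizes, field names, and window handling are invented for illustration:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define PKT_BUF_SIZE  2048u      /* illustrative packet buffer size */
    #define REGFILE_BYTES 128u       /* bytes the register file can cache */

    struct pkt_regfile {
        uint8_t  bytes[REGFILE_BYTES];
        uint16_t cached_start;       /* packet offset of the first cached byte */
        uint16_t cached_len;         /* number of valid cached bytes */
    };

    /* On a packet data prefetch trigger condition, check whether every
     * byte of the window [win_start, win_start + win_len) is cached; if
     * not, retrieve the window from the packet buffer memory so that a
     * subsequently executed instruction finds the bytes already present. */
    static void prefetch_window(struct pkt_regfile *rf,
                                const uint8_t pkt_buf[PKT_BUF_SIZE],
                                uint16_t win_start, uint16_t win_len)
    {
        bool covered = win_start >= rf->cached_start &&
                       (uint32_t)win_start + win_len <=
                       (uint32_t)rf->cached_start + rf->cached_len;
        if (!covered && win_len <= REGFILE_BYTES) {
            memcpy(rf->bytes, pkt_buf + win_start, win_len);
            rf->cached_start = win_start;
            rf->cached_len   = win_len;
        }
    }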
Abstract:
A transactional memory (TM) receives a lookup command across a bus from a processor. The command includes a memory address, a starting bit position, and a mask size. In response to the command, the TM pulls an input value (IV). The memory address is used to read a word containing multiple result values (RVs) and multiple key values from memory. Each key value indicates a single RV to be output by the TM. A selecting circuit within the TM uses the starting bit position and mask size to select a portion of the IV. The portion of the IV is a key selector value. A key value is selected based upon the key selector value. A RV is selected based upon the key value. The key value is selected by a key selection circuit. The RV is selected by a result value selection circuit.
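As a rough sketch (not part of the abstract), the selection chain could be written as follows; the word layout (eight key values, eight result values) and field widths are invented for illustration:

    #include <stdint.h>

    /* Illustrative memory word read at the command's memory address. */
    struct lookup_word {
        uint8_t  key[8];        /* each key value indicates one result value */
        uint32_t result[8];
    };

    static uint32_t lookup(const struct lookup_word *w, uint64_t iv,
                           unsigned start_bit, unsigned mask_size)
    {
        /* Selecting circuit: take mask_size bits of the IV beginning at
         * start_bit; this portion of the IV is the key selector value. */
        uint64_t selector = (iv >> start_bit) & ((1ull << mask_size) - 1);

        /* Key selection circuit: the key selector picks a key value. */
        uint8_t key = w->key[selector & 0x7];

        /* Result value selection circuit: the key value picks the RV
         * that the transactional memory outputs. */
        return w->result[key & 0x7];
    }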
Abstract:
A device has a physical network interface port through which a user can monitor and configure the device. A backend process and a virtual machine (VM) execute on a host operating system (OS). A front end user interface process executes on the VM, and is therefore compartmentalized in the VM. There is no front end user interface executing on the host OS outside the VM. The only management access channel into the device is via a first communication path through the physical network interface port, to the VM, up the VM's stack, and to the front end process. If the backend process is to be instructed to take an action, then the front end process forwards an application layer instruction to the backend process via a second communication path. The instruction passes down the VM stack, across a virtual secure network link, up the host stack, and to the backend process.
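As a rough sketch (not part of the abstract), the second communication path could be pictured as the front end process forwarding an application-layer instruction toward the backend process; the address, port, and use of a bare TCP socket are invented for illustration, and the real path runs across a virtual secure network link:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Front end side: forward one application-layer instruction to the
     * backend process running on the host OS. */
    static int forward_instruction(const char *instruction)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_in backend = {0};
        backend.sin_family = AF_INET;
        backend.sin_port   = htons(9000);            /* illustrative port */
        inet_pton(AF_INET, "192.168.100.1", &backend.sin_addr);

        int rc = -1;
        if (connect(fd, (struct sockaddr *)&backend, sizeof(backend)) == 0)
            rc = (int)send(fd, instruction, strlen(instruction), 0);
        close(fd);
        return rc;
    }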
Abstract:
A method involves receiving a packet descriptor including a priority indicator and a queue number indicating a queue stored within a first memory unit; storing a packet associated with the packet descriptor in a second memory unit; determining a first amount of free memory in the first memory unit; determining if the first amount of free memory is above a threshold value; writing the packet from the second memory unit to a third memory unit when the first amount of free memory is above the threshold value and the priority indicator is equal to a first value; and not writing the packet from the second memory unit to the third memory unit when the first amount of free memory is below the threshold value or the priority indicator is equal to a second value. The priority indicator is equal to the first value for high priority packets and to the second value for low priority packets.
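As a rough sketch (not part of the abstract), the write decision reduces to a simple predicate; the constant and parameter names are invented for illustration:

    #include <stdbool.h>
    #include <stdint.h>

    #define PRIORITY_HIGH 1u    /* the first value: high priority packets */
    #define PRIORITY_LOW  0u    /* the second value: low priority packets */

    /* The packet is written from the second memory unit to the third
     * memory unit only when the first memory unit's free memory is above
     * the threshold and the packet is marked high priority. */
    static bool write_to_third_memory(uint32_t free_mem_first_unit,
                                      uint32_t threshold,
                                      uint32_t priority_indicator)
    {
        return free_mem_first_unit > threshold &&
               priority_indicator == PRIORITY_HIGH;
    }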