Abstract:
Described embodiments provide a packet classifier for a network processor that generates tasks corresponding to each received packet. The packet classifier includes a scheduler to generate threads of contexts corresponding to tasks received by the packet classifier from a plurality of processing modules of the network processor. A multi-thread instruction engine processes instructions corresponding to threads received from the scheduler. The multi-thread instruction engine executes instructions by fetching an instruction of the thread from an instruction memory of the packet classifier and determining whether a breakpoint mode of the network processor is enabled. If the breakpoint mode is enabled and a breakpoint indicator of the fetched instruction is set, the packet classifier enters the breakpoint mode. Otherwise, if the breakpoint indicator of the fetched instruction is not set, the multi-thread instruction engine executes the fetched instruction.
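As a rough sketch of the fetch-and-check step described above (all type and field names, such as bp_enabled and bp_bit, are illustrative assumptions rather than terms from the embodiment), the per-thread logic might look like:

```c
/* Minimal sketch of the fetch/breakpoint check; all identifiers are
 * illustrative assumptions, not terms from the described embodiment. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t opcode;
    bool     bp_bit;        /* per-instruction breakpoint indicator */
} instr_t;

typedef struct {
    instr_t *imem;          /* instruction memory of the packet classifier */
    bool     bp_enabled;    /* global breakpoint mode of the network processor */
    bool     halted;        /* set when the classifier enters breakpoint mode */
} classifier_t;

static void execute(classifier_t *c, const instr_t *i)
{
    (void)c; (void)i;       /* instruction execution elided */
}

/* Fetch one instruction for a thread: enter breakpoint mode only when the
 * global enable and the instruction's breakpoint indicator are both set;
 * otherwise execute the fetched instruction. */
void step_thread(classifier_t *c, uint32_t pc)
{
    const instr_t *i = &c->imem[pc];

    if (c->bp_enabled && i->bp_bit) {
        c->halted = true;   /* classifier enters breakpoint mode */
        return;
    }
    execute(c, i);          /* breakpoint indicator not set: run it */
}
```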
Abstract:
Described embodiments provide address translation for data stored in at least one shared memory of a network processor. A processing module of the network processor generates tasks corresponding to each of a plurality of received packets. A packet classifier generates contexts for each task, each context associated with a thread of instructions to apply to the corresponding packet. A first subset of instructions is stored in a tree memory within the at least one shared memory. A second subset of instructions is stored in a cache within a multi-thread engine of the packet classifier. The multi-thread engine maintains status indicators corresponding to the first and second subsets of instructions within the cache and the tree memory and, based on the status indicators, accesses a lookup table while processing a thread to translate between an instruction number and a physical address of the instruction in the first and second subsets of instructions.
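A minimal sketch of such a translation step, assuming a per-instruction lookup-table entry that records both a status indicator (cache versus tree memory) and a physical address; the names below are assumptions for illustration only:

```c
/* Sketch of an instruction-number-to-physical-address lookup; the entry
 * layout and names are assumptions for illustration only. */
#include <stdint.h>

enum location { IN_CACHE, IN_TREE_MEMORY };   /* status indicator */

typedef struct {
    enum location where;       /* which memory currently holds the instruction */
    uint32_t      phys_addr;   /* physical offset within that memory */
} xlat_entry_t;

typedef struct {
    xlat_entry_t *lut;         /* lookup table indexed by instruction number */
    uint8_t      *cache;       /* cache inside the multi-thread engine */
    uint8_t      *tree_mem;    /* tree memory in the shared memory */
} engine_t;

/* Translate an instruction number into a pointer into whichever memory the
 * status indicator says currently holds that instruction. */
uint8_t *instruction_ptr(engine_t *e, uint32_t instr_num)
{
    const xlat_entry_t *x = &e->lut[instr_num];
    uint8_t *base = (x->where == IN_CACHE) ? e->cache : e->tree_mem;
    return base + x->phys_addr;
}
```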
Abstract:
Technologies for a distributed hardware queue manager include a compute device having a processor. The processor includes two or more hardware queue managers as well as two or more processor cores. Each processor core can enqueue or dequeue data from the hardware queue manager. Each hardware queue manager can be configured to contain several queue data structures. In some embodiments, the queues are addressed by the processor cores using virtual queue addresses, which are translated into physical queue addresses for accessing the corresponding hardware queue manager. The virtual queues can be moved from one physical queue in one hardware queue manager to a different physical queue in a different hardware queue manager without changing the virtual address of the virtual queue.
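A hedged sketch of the virtual-to-physical queue translation, assuming a simple per-virtual-queue mapping table; identifiers such as hqm_enqueue are hypothetical stand-ins for the hardware interface:

```c
/* Illustrative sketch only: a virtual queue id is resolved through a
 * mapping table to (hardware queue manager, physical queue), so the queue
 * can be migrated without changing its virtual address. hqm_enqueue is a
 * hypothetical stand-in for the device interface. */
#include <stdint.h>

typedef struct {
    uint8_t  hqm_id;        /* which hardware queue manager holds the queue */
    uint16_t phys_queue;    /* physical queue index inside that manager */
} queue_mapping_t;

static queue_mapping_t vq_table[1024];   /* indexed by virtual queue id */

static void hqm_enqueue(uint8_t hqm, uint16_t pq, uint64_t data)
{
    (void)hqm; (void)pq; (void)data;     /* device doorbell write elided */
}

/* Cores enqueue through the virtual address; only the mapping table is
 * consulted to reach the physical queue. */
void enqueue(uint32_t virt_queue, uint64_t data)
{
    queue_mapping_t m = vq_table[virt_queue];
    hqm_enqueue(m.hqm_id, m.phys_queue, data);
}

/* Migrating a virtual queue rewrites its mapping; the virtual address that
 * the cores use stays the same. */
void migrate_queue(uint32_t virt_queue, uint8_t new_hqm, uint16_t new_pq)
{
    vq_table[virt_queue].hqm_id     = new_hqm;
    vq_table[virt_queue].phys_queue = new_pq;
}
```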
Abstract:
Improved techniques are disclosed for performing an in-service upgrade of software associated with a network or packet processor. By way of example, a method of managing data structures associated with code executable on a packet processor includes the following steps. Data structures in the code are identified as either static data structures or non-static data structures, wherein a static data structure is a data structure that is not changed during execution of the packet processor code and a non-static data structure is a data structure that is changed during execution of the packet processor code. One or more data structures associated with the packet processor code are managed in a manner specific to the identification of the one or more data structures as static data structures or non-static data structures. At least a portion of the data structures may include tree structures.
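As an illustration only (the helper names and the exact handling are assumptions, not the disclosed method), the static/non-static classification could drive two different management paths:

```c
/* Illustration with assumed names: each structure is tagged static or
 * non-static, and the upgrade logic manages it accordingly. The helpers
 * are hypothetical placeholders, not the disclosed method. */
#include <stddef.h>

typedef enum { DS_STATIC, DS_NON_STATIC } ds_kind_t;

typedef struct {
    const char *name;
    ds_kind_t   kind;     /* is the structure modified while the code runs? */
    void       *image;    /* current in-memory image of the structure */
    size_t      size;
} data_struct_t;

static void reuse_existing_image(data_struct_t *ds)  { (void)ds; /* keep as-is */ }
static void migrate_runtime_state(data_struct_t *ds) { (void)ds; /* carry live state over */ }

/* Manage a structure in a manner specific to its classification: a static
 * structure (e.g. a fixed tree) can be reused unchanged, while a non-static
 * structure needs its run-time state migrated. */
void manage_for_upgrade(data_struct_t *ds)
{
    if (ds->kind == DS_STATIC)
        reuse_existing_image(ds);
    else
        migrate_runtime_state(ds);
}
```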
Abstract:
Improved techniques are disclosed for performing an in-service upgrade of software associated with a network or packet processor. By way of example, a method of performing an in-service upgrade of code, storable in a memory associated with a packet processor and executable on the packet processor, from a first code version to a second code version, includes the following steps. A first step includes preparing for the upgrade by generating one or more write operations to effectuate the code upgrade from the first code version to the second code version. A second step includes updating the code from the first code version to the second code version by propagating the one or more write operations to the packet processor. A third step includes cleaning up after the updating step by reclaiming one or more memory locations that become available after the updating step. As such, only a single version of the code needs to be stored in the memory associated with the packet processor.
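A sketch of the prepare/update/clean-up flow under assumed types, where pp_write and pp_free stand in for whatever interface actually propagates writes to, and reclaims memory on, the packet processor:

```c
/* Sketch of the prepare/update/clean-up flow with assumed types; pp_write
 * and pp_free are hypothetical hooks into the packet processor's memory. */
#include <stddef.h>
#include <stdint.h>

typedef struct { uint32_t addr; uint32_t value; } write_op_t;

typedef struct {
    write_op_t *ops;       /* write operations that turn version 1 into version 2 */
    size_t      n_ops;
    uint32_t   *reclaim;   /* memory locations free once the update is applied */
    size_t      n_reclaim;
} upgrade_plan_t;

static void pp_write(uint32_t addr, uint32_t value) { (void)addr; (void)value; }
static void pp_free(uint32_t addr)                  { (void)addr; }

void in_service_upgrade(const upgrade_plan_t *plan)
{
    /* 1. Prepare: the plan (write operations plus reclaim list) has already
     *    been generated from the two code versions. */

    /* 2. Update: propagate each write operation to the packet processor, so
     *    only one code version ever resides in its memory. */
    for (size_t i = 0; i < plan->n_ops; i++)
        pp_write(plan->ops[i].addr, plan->ops[i].value);

    /* 3. Clean up: reclaim the locations no longer referenced. */
    for (size_t i = 0; i < plan->n_reclaim; i++)
        pp_free(plan->reclaim[i]);
}
```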
Abstract:
Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus includes multi-core processors, a shared L3 or last-level cache (“LLC”), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests, to reduce core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.
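One common way such rate control is realized is a credit scheme; the sketch below is an assumption-laden illustration of that idea rather than the disclosed design, and qmd_enqueue is a hypothetical stand-in for the device doorbell:

```c
/* Credit-based submission sketch; all names are assumptions, and
 * qmd_enqueue is a hypothetical stand-in for the device doorbell. */
#include <stdbool.h>
#include <stdint.h>

typedef struct { uint64_t payload; } qmd_request_t;

typedef struct {
    uint32_t credits;     /* outstanding-request budget for this core */
} core_state_t;

static void qmd_enqueue(const qmd_request_t *req) { (void)req; }

/* A core may submit only while it holds credits; returning false lets the
 * caller back off instead of stalling or having the request dropped. */
bool submit_request(core_state_t *core, const qmd_request_t *req)
{
    if (core->credits == 0)
        return false;          /* throttled by resource management */
    core->credits--;
    qmd_enqueue(req);
    return true;
}

/* The device hands credits back as it drains requests. */
void credit_return(core_state_t *core, uint32_t n)
{
    core->credits += n;
}
```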
Abstract:
Techniques are disclosed for generating a representation of an access control list, the representation being utilizable in a network processor or other type of processor to perform packet filtering or other type of access control list based function. A plurality of rules of the access control list are determined, each of at least a subset of the rules having a plurality of fields and a corresponding action. The rules are processed to generate a multi-level tree representation of the access control list, in which each of one or more of the levels of the tree representation is associated with a corresponding one of the fields. At least one level of the tree representation other than a root level of the tree representation comprises a plurality of nodes, with at least two of the nodes at that level each having a separate matching table associated therewith.
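A sketch of a lookup over such a multi-level tree, where each node carries its own matching table for its level's field; all structure and function names are assumptions for illustration:

```c
/* Sketch with assumed types: each node owns its own matching table for the
 * field associated with its level; classification walks one level per field. */
#include <stddef.h>
#include <stdint.h>

struct acl_node;

typedef struct {
    uint32_t lo, hi;            /* entry matches if the field value is in [lo, hi] */
    struct acl_node *next;      /* node at the next level (NULL for a leaf entry) */
    int action;                 /* action to apply when this entry ends the walk */
} match_entry_t;

typedef struct acl_node {
    match_entry_t *table;       /* this node's separate matching table */
    size_t         n_entries;
} acl_node_t;

/* fields[i] holds the packet's value for the field associated with level i. */
int classify(const acl_node_t *root, const uint32_t *fields, size_t n_fields)
{
    const acl_node_t *node = root;
    for (size_t level = 0; level < n_fields && node != NULL; level++) {
        const match_entry_t *hit = NULL;
        for (size_t i = 0; i < node->n_entries; i++) {
            if (fields[level] >= node->table[i].lo &&
                fields[level] <= node->table[i].hi) {
                hit = &node->table[i];
                break;
            }
        }
        if (hit == NULL)
            return -1;              /* no entry matched: default action */
        if (hit->next == NULL)
            return hit->action;     /* leaf entry: the rule's action */
        node = hit->next;           /* descend to the next level's node */
    }
    return -1;
}
```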
Abstract:
Techniques are disclosed for generating a representation of an access control list, the representation being utilizable in a network processor or other type of processor to perform packet filtering or other type of access control list based function. A plurality of rules of the access control list are determined, each of at least a subset of the rules having a plurality of fields and a corresponding action. The rules are processed to generate a multi-level tree representation of the access control list, in which each of one or more of the levels of the tree representation is associated with a corresponding one of the fields. At least one level of the tree representation comprises a plurality of nodes, with two or more of the nodes of that level having a common subtree, and the tree representation including only a single copy of that subtree. The tree representation is characterizable as a directed graph in which each of the two or more nodes having the common subtree points to the single copy of the common subtree.
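A small self-contained sketch (assumed types, not the patent's representation) showing two nodes at one level sharing a single copy of a common subtree, which makes the structure a directed graph rather than a strict tree:

```c
/* Self-contained sketch with assumed types: two nodes at one level point at
 * a single shared copy of an identical subtree, so the representation is a
 * directed graph rather than a tree with duplicated subtrees. */
#include <stddef.h>

typedef struct node {
    const char   *label;
    struct node **children;    /* entries point to nodes at the next level */
    size_t        n_children;
} node_t;

int main(void)
{
    node_t shared = { "common-subtree", NULL, 0 };

    /* Both level-two nodes reference the same subtree object instead of
     * each storing its own copy of it. */
    node_t *a_kids[] = { &shared };
    node_t *b_kids[] = { &shared };
    node_t node_a = { "level2-a", a_kids, 1 };
    node_t node_b = { "level2-b", b_kids, 1 };

    node_t *root_kids[] = { &node_a, &node_b };
    node_t root = { "root", root_kids, 2 };

    (void)root;
    return 0;
}
```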