Abstract:
In a distributed networking environment employing several general purpose processors (i.e., control point processors) for controlling one or more network processor devices, a mechanism is provided for distributing processing across the several general purpose processors, together with an interface for configuring a network processor so that specific general purpose processors handle specific operations in a large networking environment, thus reducing the requirement for provisioning a plurality of protocol stacks on each general purpose processor.
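A minimal sketch, in C, of the kind of configuration interface the abstract describes; the operation classes and function names below are hypothetical, chosen only to illustrate how a network processor could be told which control point handles which operation so that each CP needs only the matching protocol stack.

    #include <stdint.h>

    /* Hypothetical operation classes handled by dedicated control points. */
    enum cp_operation { CP_OP_ROUTING_UPDATES, CP_OP_ARP, CP_OP_MGMT, CP_OP_COUNT };

    /* Configuration table: one control point processor ID per operation class,
     * so no single CP needs every protocol stack provisioned. */
    static uint8_t cp_for_operation[CP_OP_COUNT];

    void configure_cp(enum cp_operation op, uint8_t cp_id)
    {
        cp_for_operation[op] = cp_id;      /* bind this operation class to a CP */
    }

    uint8_t select_cp(enum cp_operation op)
    {
        return cp_for_operation[op];       /* NP redirects matching frames to this CP */
    }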
Abstract:
Certain Layer 3 protocol data frames propagated on a network are typically processed by a control point (CP) in a network switch. The logical bridging and routing functions required in this processing typically entail network device address look-ups in routing tables and address databases. Using the CP to perform these look-ups is expensive in terms of processor cycles and memory. To offload the CP, the bridging functions are performed by a network processor in the switch. The network processor has specialized software and hardware enabling it to perform the required database look-ups faster and more efficiently than the CP.
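A rough illustration of the kind of address look-up offloaded to the network processor; the hash table below is an assumption for clarity, not the patented search hardware, which the abstract only characterizes as specialized software and hardware.

    #include <stdint.h>
    #include <string.h>

    #define BRIDGE_TABLE_SIZE 1024

    struct bridge_entry {
        uint8_t  mac[6];    /* learned station address        */
        uint16_t port;      /* egress port for that address   */
        int      valid;
    };

    static struct bridge_entry bridge_table[BRIDGE_TABLE_SIZE];

    /* Simple hash over the MAC address; a real NP uses dedicated search engines. */
    static unsigned hash_mac(const uint8_t mac[6])
    {
        unsigned h = 0;
        for (int i = 0; i < 6; i++)
            h = h * 31 + mac[i];
        return h % BRIDGE_TABLE_SIZE;
    }

    /* Returns the egress port, or -1 when the frame must be flooded or
     * escalated to the control point. */
    int bridge_lookup(const uint8_t dst_mac[6])
    {
        struct bridge_entry *e = &bridge_table[hash_mac(dst_mac)];
        if (e->valid && memcmp(e->mac, dst_mac, 6) == 0)
            return e->port;
        return -1;
    }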
Abstract:
A multicast processor minimizes the software resources needed to process multicast and broadcast protocols for bridges and routers in a network processor based environment. The multicast forwarding processor receives multicast and broadcast Layer 2/Layer 3/Layer 4 (L2/L3/L4) frames from a network processor. During reception, a frame layer flag, a unicast/multicast flag, and a frame position flag are set. A multicast forwarding table is accessed, and the frame layer, unicast/multicast, and frame position flags are stored and updated. The frame layer, unicast/multicast, and frame position flags are then sent to a frame forwarding processor. The L2/L3/L4 frames are routed to an L2 learning processor. The L2/L3/L4 frames are received from the frame forwarding processor, and the L2/L3/L4 frames are sent to an L3/L4 processor for frame header modification. The modified L2/L3/L4 frames are received from said L3/L4 processor, and the modified L2/L3/L4 frames are sent to an L2 filter processor.
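A simplified sketch of the flag-setting step performed during reception; the flag encodings and structure names below are invented for illustration, under the assumption that the unicast/multicast decision follows the group bit of the destination MAC address.

    #include <stdint.h>

    struct mc_frame_state {
        uint8_t layer_flag;      /* 2, 3, or 4: layer of the received frame      */
        uint8_t multicast_flag;  /* 0 = unicast, 1 = multicast/broadcast         */
        uint8_t position_flag;   /* 0 = first, 1 = middle, 2 = last frame        */
    };

    /* Set during reception; the flags are then stored in the multicast
     * forwarding table and passed on to the frame forwarding processor. */
    void classify_frame(struct mc_frame_state *s, uint8_t layer,
                        const uint8_t dst_mac[6], int is_first, int is_last)
    {
        s->layer_flag     = layer;
        s->multicast_flag = (dst_mac[0] & 0x01) ? 1 : 0;   /* group bit of the MAC */
        s->position_flag  = is_first ? 0 : (is_last ? 2 : 1);
    }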
Abstract:
A method and apparatus for processing network frames by embedding control information achieves an efficient frame processing system within a network processor (NP). The layer processing components of the picocode running on the NP can quickly determine the layer type of a frame by examining control information written by the ingress processing layers to produce a modified frame format. The frames are routed to the appropriate layer processors, and processing for certain layers may be bypassed if the picocode determines that no processing is required at that layer. The frame may also be discarded completely by any of the layer processors.
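A sketch in C of how embedded control bits could drive the per-layer dispatch; the control word layout (layer type, bypass mask, discard flag) is an assumption used only to show the bypass and discard behavior the abstract describes.

    #include <stdint.h>
    #include <stdbool.h>

    /* Control word prepended to the frame by ingress processing (layout invented). */
    struct frame_control {
        uint8_t layer_type;   /* which layer's handler should see the frame      */
        uint8_t bypass_mask;  /* bit n set => layer n needs no processing        */
        bool    discard;      /* any layer processor may mark the frame dropped  */
    };

    struct frame {
        struct frame_control ctl;
        uint8_t payload[1500];
    };

    /* Dispatch loop run by the picocode: consult the control word instead of
     * re-parsing the frame at every layer. */
    void process_frame(struct frame *f)
    {
        for (int layer = 2; layer <= 4; layer++) {
            if (f->ctl.discard)
                return;                           /* dropped by an earlier layer   */
            if (f->ctl.bypass_mask & (1u << layer))
                continue;                         /* no work needed at this layer  */
            /* ... hand off to the layer-specific processor here ... */
        }
    }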
Abstract:
Disclosed is a method and system for validating a data packet by a network processor supporting a first network protocol and a second network protocol and utilizing shared hardware. The network processor receives a data packet; identifies a network packet protocol for the data packet; and processes the data packet according to the network packet protocol comprising: updating a first register with a first partial packet length specific to the first network protocol; updating a second register with a second partial packet length specific to the second network protocol; and updating a third register with a first checksum computed from fields independent of the network protocol. The system produces a second checksum utilizing a function that combines values from the first register, the second register, and the third register. The system validates the data packet by comparing the data packet checksum to the second checksum.
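A minimal sketch, assuming IPv4 and IPv6 as the first and second network protocols, of how the three registers could be combined into the validation checksum. The register names, the one's-complement folding, and the choice to select the length register matching the identified protocol are illustrative assumptions, not the claimed hardware.

    #include <stdint.h>

    /* Running 32-bit accumulators, folded to 16 bits at the end. */
    static uint32_t reg_len_v4;    /* partial length contribution, IPv4 rules       */
    static uint32_t reg_len_v6;    /* partial length contribution, IPv6 rules       */
    static uint32_t reg_common;    /* checksum over protocol-independent fields     */

    static uint16_t fold16(uint32_t sum)
    {
        while (sum >> 16)
            sum = (sum & 0xFFFF) + (sum >> 16);
        return (uint16_t)sum;
    }

    /* Combine the registers into the second checksum used for validation:
     * only the length register matching the identified protocol contributes. */
    uint16_t combined_checksum(int is_ipv6)
    {
        uint32_t sum = reg_common + (is_ipv6 ? reg_len_v6 : reg_len_v4);
        return (uint16_t)(~fold16(sum) & 0xFFFF);
    }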
Abstract:
A network packet includes a packet key that includes one or more source-destination field pairs, each comprising a source field and a destination field. For each selected source-destination field pair, first and second sections are selected in the packet key. A source field value is extracted from the source field and a destination field value is extracted from the destination field. For each source bit of the source field value: a destination bit is selected from the destination field; an OR logic function is applied to the source bit and the destination bit to generate a first resulting value, which is stored at the same bit position as the source bit in the first section; and an AND logic function is applied to the source bit and the destination bit to generate a second resulting value, which is stored at the same bit position as the source bit in the second section.
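A direct sketch of the described bitwise fold, under two assumptions: the selected destination bit sits at the same position as the source bit, and the field pair is 32 bits wide (e.g. an IPv4 source/destination address pair). The OR results form the first section and the AND results the second.

    #include <stdint.h>

    struct folded_key {
        uint32_t or_section;    /* first section: source OR destination   */
        uint32_t and_section;   /* second section: source AND destination */
    };

    /* Fold one source-destination field pair into the two key sections. */
    struct folded_key fold_pair(uint32_t src, uint32_t dst)
    {
        struct folded_key k;
        k.or_section  = src | dst;   /* bit i = OR  of source bit i and destination bit i */
        k.and_section = src & dst;   /* bit i = AND of source bit i and destination bit i */
        return k;
    }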
Abstract:
Apparatuses and methods to manage a global forwarding table in a distributed switch are provided. A particular method may include managing a global forwarding table in a distributed switch. The distributed switch may include a plurality of switch forwarding units. The method may start a timer for an entry in the global forwarding table, and the entry may include a multicast destination address and corresponding multicast membership information. The method may also, in response to expiration of the timer of the entry, check at least one hit status to determine whether at least one switch forwarding unit of the plurality of switch forwarding units has forwarded multicast data to the corresponding multicast membership information of the multicast destination address of the entry. The method may further determine whether the entry is a cast-out candidate based on the hit status.
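A simplified sketch, with invented structure names and an assumed per-unit hit bit, of the timer-expiry check that decides whether a forwarding-table entry becomes a cast-out candidate.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_SWITCH_FORWARDING_UNITS 8

    struct mc_entry {
        uint64_t multicast_dest_addr;
        uint32_t membership;                           /* multicast membership info    */
        bool     hit[NUM_SWITCH_FORWARDING_UNITS];     /* per-unit forwarding hit bits  */
    };

    /* Called when the entry's timer expires: if no switch forwarding unit has
     * forwarded multicast data to this entry's membership since the last check,
     * the entry is a cast-out candidate; otherwise the hit bits are cleared and
     * the timer can be restarted. */
    bool on_timer_expired(struct mc_entry *e)
    {
        bool any_hit = false;
        for (int i = 0; i < NUM_SWITCH_FORWARDING_UNITS; i++) {
            if (e->hit[i])
                any_hit = true;
            e->hit[i] = false;       /* re-arm for the next aging interval */
        }
        return !any_hit;             /* true => cast-out candidate */
    }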
Abstract:
According to embodiments of the invention, there is provided a method for operating a network processor. The network processor receives a first data packet in a stream of data packets and has a set of receive-queues adapted to store received data packets. The network processor processes the first data packet by reading a flow identification in the first data packet; determining a quality of service for the first data packet; mapping the flow identification and the quality of service into an index for selecting a first receive-queue for routing the first data packet; and utilizing the index to route the first data packet to the first receive-queue.
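A small sketch of one possible mapping from flow identification and quality of service to a receive-queue index; the queue layout (a fixed number of queues per service class) and the modulo hashing are assumptions made for illustration.

    #include <stdint.h>

    #define NUM_RX_QUEUES_PER_QOS 16   /* assumed layout: 16 receive-queues per service class */

    /* Map (flow id, QoS class) to a receive-queue index: packets of the same
     * flow always land in the same queue, and each QoS class owns its own range. */
    unsigned select_rx_queue(uint32_t flow_id, uint8_t qos_class)
    {
        unsigned within_class = flow_id % NUM_RX_QUEUES_PER_QOS;
        return (unsigned)qos_class * NUM_RX_QUEUES_PER_QOS + within_class;
    }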
Abstract:
A mechanism is provided for merging, in a network processor, results from a parser with results from an external coprocessor providing processing support requested by the parser. The mechanism enqueues in a result queue both parser results that need to be merged with a coprocessor result and parser results that do not. An additional queue is used to enqueue the addresses of the result queue locations where the parser results are stored. The result from the coprocessor is received in a simple response register. The coprocessor result is read from the response register by the result queue management logic and merged with the corresponding incomplete parser result, read from the result queue at the address enqueued in the additional queue.
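A rough sketch of the two-queue bookkeeping; the structure names, queue depth, and merge position within the result are invented, and complete parser results are assumed to pass through the result queue without touching the additional queue.

    #include <stdint.h>
    #include <stdbool.h>

    #define RESULT_QUEUE_DEPTH 64

    struct parser_result {
        uint8_t data[32];
        bool    needs_coprocessor;    /* still waiting for the coprocessor response */
    };

    static struct parser_result result_queue[RESULT_QUEUE_DEPTH];

    /* Additional queue holding the result-queue addresses of parser results
     * that are awaiting a coprocessor response. */
    static uint8_t  pending_addr[RESULT_QUEUE_DEPTH];
    static unsigned pending_head, pending_tail;

    /* Remember the address of an incomplete parser result at enqueue time. */
    void remember_pending(uint8_t result_queue_addr)
    {
        pending_addr[pending_tail] = result_queue_addr;
        pending_tail = (pending_tail + 1) % RESULT_QUEUE_DEPTH;
    }

    /* Called by the result queue management logic when the response register
     * holds a new coprocessor result: merge it into the matching parser result. */
    void merge_coprocessor_result(const uint8_t response[8])
    {
        uint8_t addr = pending_addr[pending_head];
        pending_head = (pending_head + 1) % RESULT_QUEUE_DEPTH;

        struct parser_result *r = &result_queue[addr];
        for (int i = 0; i < 8; i++)
            r->data[24 + i] = response[i];    /* assumed merge position in the result */
        r->needs_coprocessor = false;
    }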
Abstract:
A host Ethernet adapter (HEA) and method of managing network communications are provided. The HEA includes a host interface configured for communication with a multi-core processor over a processor bus. The host interface comprises a receive processing element including a receive processor, a receive buffer, and a scheduler for dispatching packets from the receive buffer to the receive processor; a send processing element including a send processor and a send buffer; and a completion queue scheduler (CQS) for dispatching completion queue elements (CQE) from the head of the completion queue (CQ) to threads of the multi-core processor in a network node mode. The method comprises operatively coupling an Ethernet adapter to a multi-core processor system via a processor bus; selectively assigning a first plurality of packets to a first queue pair for servicing in an endpoint mode; running a device driver on the multi-core processor system, the device driver controlling the servicing of the first queue pair by dispatching the first plurality of packets to only one processor core of the multi-core processor system; selectively assigning a second plurality of packets to a second queue pair for servicing in a network node mode; and the Ethernet adapter controlling the servicing of the second queue pair by dispatching the second plurality of packets to multiple processor threads.
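A coarse sketch of the dispatch decision the abstract describes; the enum, field names, thread count, and round-robin spreading are hypothetical. Endpoint-mode queue pairs are funneled to a single designated core by the device driver, while network-node-mode queue pairs are spread across multiple processor threads by the adapter's completion queue scheduler.

    #include <stdint.h>

    enum qp_mode { QP_ENDPOINT_MODE, QP_NETWORK_NODE_MODE };

    struct queue_pair {
        enum qp_mode mode;
        unsigned     owner_core;    /* only meaningful in endpoint mode */
    };

    #define NUM_THREADS 16

    /* Pick the processor core/thread that should service the next completion
     * queue element (CQE) for this queue pair. */
    unsigned dispatch_cqe(const struct queue_pair *qp, unsigned cqe_sequence)
    {
        if (qp->mode == QP_ENDPOINT_MODE)
            return qp->owner_core;              /* device driver funnels to one core  */
        return cqe_sequence % NUM_THREADS;      /* CQS spreads work across threads    */
    }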